$ traceroute -m 120 xmas.futile.net
traceroute to xmas.futile.net (77.75.106.106), 120 hops max, 60 byte packets
1 ec2-50-112-0-86.us-west-2.compute.amazonaws.com (50.112.0.86) 1.700 ms ec2-50-112-0-82.us-west-2.compute.amazonaws.com (50.112.0.82) 2.160 ms 2.121 ms
2 100.64.1.143 (100.64.1.143) 1.287 ms 100.64.1.157 (100.64.1.157) 1.850 ms 100.64.1.179 (100.64.1.179) 1.514 ms
3 100.64.0.114 (100.64.0.114) 1.697 ms 100.64.0.56 (100.64.0.56) 1.431 ms 100.64.0.66 (100.64.0.66) 1.637 ms
4 100.64.16.89 (100.64.16.89) 0.816 ms 100.64.16.29 (100.64.16.29) 0.870 ms 100.64.16.153 (100.64.16.153) 0.929 ms
5 205.251.232.166 (205.251.232.166) 3.181 ms 2.279 ms 54.239.48.184 (54.239.48.184) 1.175 ms
6 205.251.232.154 (205.251.232.154) 1.731 ms 1.315 ms 205.251.232.198 (205.251.232.198) 1.409 ms
7 205.251.232.89 (205.251.232.89) 7.241 ms 205.251.232.91 (205.251.232.91) 8.558 ms 205.251.232.78 (205.251.232.78) 7.016 ms
8 205.251.225.201 (205.251.225.201) 7.385 ms 205.251.226.226 (205.251.226.226) 6.461 ms 205.251.226.192 (205.251.226.192) 5.849 ms
9 ae-8.r04.sttlwa01.us.bb.gin.ntt.net (198.104.202.189) 8.910 ms ae-18.r05.sttlwa01.us.bb.gin.ntt.net (129.250.201.177) 7.586 ms ae-8.r04.sttlwa01.us.bb.gin.ntt.net (198.104.202.189) 8.911 ms
10 ae-7.r21.sttlwa01.us.bb.gin.ntt.net (129.250.5.48) 7.377 ms ae-6.r21.sttlwa01.us.bb.gin.ntt.net (129.250.5.44) 6.058 ms 8.077 ms
11 ae-3.r22.nycmny01.us.bb.gin.ntt.net (129.250.2.50) 73.357 ms 74.689 ms 71.985 ms
12 ae-5.r22.londen03.uk.bb.gin.ntt.net (129.250.3.127) 145.012 ms 153.988 ms 154.003 ms
13 ae-1.r02.londen03.uk.bb.gin.ntt.net (129.250.5.25) 142.310 ms 143.254 ms 144.756 ms
14 xe-0-0-3.edge00.the.uk.hso-group.net (62.73.169.34) 155.629 ms 153.431 ms 170.395 ms
15 xe-3-3.core00.the.uk.hso-group.net (93.89.91.13) 154.304 ms 145.935 ms 155.607 ms
16 xe-4-4.core00.thw.uk.hso-group.net (77.75.108.135) 161.489 ms 170.428 ms 168.645 ms
17 xe-8-4.core00.thn.uk.hso-group.net (77.75.108.137) 145.769 ms 151.782 ms 150.956 ms
18 xe-4-4.core00.gs1.uk.hso-group.net (77.75.108.160) 155.492 ms 154.586 ms 153.756 ms
19 ae0-1203.edge00.sov.uk.hso-group.net (46.17.60.117) 162.573 ms 145.314 ms 156.443 ms
20 xoxoxoxoxoxo.Ho.Ho.Ho.xoxoxoxoxoxo (93.89.84.75) 152.557 ms 148.137 ms 149.111 ms
21 ooooxooooooxooo.V.ooooooxooooxoooo (82.133.91.37) 148.166 ms 158.433 ms 157.012 ms
22 ooxoooooxooooo.MMM.ooooooooxxoooxo (82.133.91.18) 146.747 ms 150.743 ms 148.490 ms
23 oooxoooooxooo.EEEEE.oooxoooooxoooo (82.133.91.63) 155.556 ms 150.612 ms 149.207 ms
24 ooooxooxooox.RRRRRRR.ooooooxooooox (82.133.91.56) 150.052 ms 147.187 ms 147.717 ms
25 oxooooooxoo.RRRRRRRRR.oooxooooooxo (82.133.91.55) 146.850 ms 157.273 ms 163.567 ms
26 xoooxooooo.YYYYYYYYYYY.oooxooooxoo (82.133.91.58) 146.490 ms 155.792 ms 155.358 ms
27 ooxoooooxooooo.CCC.ooooooooxoooxoo (82.133.91.96) 147.694 ms 146.370 ms 146.917 ms
28 oooooxooo.HHHHHHHHHHHHH.oxoooxoooo (82.133.91.23) 156.264 ms 157.224 ms 147.976 ms
29 ooxooxoo.RRRRRRRRRRRRRRR.ooxoooxoo (82.133.91.49) 149.529 ms 157.479 ms 146.768 ms
30 oxoooxo.IIIIIIIIIIIIIIIII.oooxooxo (82.133.91.60) 147.004 ms 149.610 ms 146.144 ms
31 oooxoo.SSSSSSSSSSSSSSSSSSS.ooxoooo (82.133.91.42) 159.816 ms 147.422 ms 147.647 ms
32 oooxoooxoooooo.TTT.ooooooooooooxoo (82.133.91.61) 146.592 ms 150.407 ms 147.971 ms
33 ooxoo.MMMMMMMMMMMMMMMMMMMMMM.oooxo (82.133.91.34) 158.860 ms 157.308 ms 146.151 ms
34 xxoo.AAAAAAAAAAAAAAAAAAAAAAAA.oxoo (82.133.91.80) 157.832 ms 154.396 ms 147.990 ms
35 oxo.SSSSSSSSSSSSSSSSSSSSSSSSSS.ooo (82.133.91.40) 147.656 ms 157.133 ms 146.486 ms
36 ooxooooooooooo.XXX.oooooooooooooxo (82.133.91.35) 156.158 ms 147.422 ms 148.664 ms
37 oxoooooooooooo.XXX.ooooooooooooxxo (82.133.91.10) 156.621 ms 156.882 ms 149.265 ms
38 Oh.the.weather.outside.is.frightful (82.133.91.41) 156.463 ms 159.244 ms 149.237 ms
39 But.the.fire.is.so.delightful (82.133.91.19) 155.633 ms 155.197 ms 148.495 ms
40 And.since.weve.no.place.to.go (82.133.91.77) 163.198 ms 148.568 ms 157.004 ms
41 Let.It.Snow.Let.It.Snow.Let.It.Snow (82.133.91.43) 145.524 ms 148.442 ms 148.235 ms
42 xXx (82.133.91.24) 154.886 ms 156.168 ms 148.119 ms
43 It.doesnt.show.signs.of.stopping (82.133.91.36) 155.728 ms 149.842 ms 147.509 ms
44 And.Ive.bought.some.corn.for.popping (82.133.91.73) 156.513 ms 155.195 ms 148.890 ms
45 The.lights.are.turned.way.down.low (82.133.91.76) 156.784 ms 156.929 ms 149.023 ms
46 Let.It.Snow.Let.It.Snow.Let.It.Snow (82.133.91.67) 157.132 ms 147.427 ms 146.847 ms
47 xXx (82.133.91.38) 155.047 ms 157.809 ms 158.599 ms
48 When.we.finally.kiss.good.night (82.133.91.62) 148.095 ms 156.498 ms 155.979 ms
49 How.Ill.hate.going.out.in.the.storm (82.133.91.45) 144.783 ms 153.313 ms 154.611 ms
50 But.if.youll.really.hold.me.tight (82.133.91.78) 145.371 ms 147.084 ms 154.588 ms
51 All.the.way.home.Ill.be.warm (82.133.91.17) 148.071 ms 148.590 ms 147.392 ms
52 xXx (82.133.91.70) 151.266 ms 148.426 ms 146.638 ms
53 The.fire.is.slowly.dying (82.133.91.95) 156.077 ms 155.992 ms 157.804 ms
54 And.my.dear.were.still.goodbying (82.133.91.57) 149.232 ms 157.089 ms 155.868 ms
55 But.as.long.as.you.love.me.so (82.133.91.31) 147.339 ms 148.318 ms 156.999 ms
56 Let.It.Snow.Let.It.Snow.Let.It.Snow (82.133.91.53) 153.937 ms 155.810 ms 147.369 ms
57 oOo (82.133.91.94) 162.711 ms 148.860 ms 154.739 ms
58 Ho.Ho.Ho.Are.We.Having.Fun.Yet (82.133.91.64) 148.437 ms 156.270 ms 163.520 ms
59 M.E.R.R.Y.C.H.R.I.S.T.M.A.S (82.133.91.86) 153.629 ms 146.195 ms 154.195 ms
60 oOo (82.133.91.15) 156.081 ms 145.935 ms 155.280 ms
61 Dashing.through.the.snow (82.133.91.14) 146.503 ms 150.094 ms 155.717 ms
62 In.a.one-horse.open.sleigh (82.133.91.83) 156.842 ms 156.586 ms 159.232 ms
63 Over.the.fields.we.go (82.133.91.27) 155.382 ms 146.111 ms 146.592 ms
64 Laughing.all.the.way (82.133.91.71) 156.139 ms 164.653 ms 148.819 ms
65 Bells.on.bobtail.ring (82.133.91.79) 156.060 ms 153.467 ms 147.566 ms
66 Making.spirits.bright (82.133.91.75) 154.545 ms 146.811 ms 164.212 ms
67 What.fun.it.is.to.ride.and.sing (82.133.91.82) 146.802 ms 156.057 ms 157.627 ms
68 A.sleighing.song.tonight (82.133.91.98) 147.468 ms 148.048 ms 148.202 ms
69 oOo (82.133.91.29) 169.488 ms 156.140 ms 146.594 ms
70 Jingle.bells.jingle.bells (82.133.91.91) 149.405 ms 147.309 ms 149.885 ms
71 Jingle.all.the.way (82.133.91.81) 147.528 ms 152.163 ms 162.643 ms
72 Oh.what.fun.it.is.to.ride (82.133.91.21) 148.221 ms 148.672 ms 155.447 ms
73 In.a.one-horse.open.sleigh (82.133.91.30) 145.895 ms 150.218 ms 150.723 ms
74 Jingle.bells.jingle.bells (82.133.91.59) 157.465 ms 157.238 ms 148.593 ms
75 Jingle.all.the.way (82.133.91.32) 149.462 ms 148.965 ms 150.207 ms
76 Oh.what.fun.it.is.to.ride (82.133.91.74) 156.425 ms 153.923 ms 148.276 ms
77 In.a.one-horse.open.sleigh (82.133.91.87) 146.874 ms 156.538 ms 149.652 ms
78 M-E-R-R-Y--C-H-R-I-S-T-M-A-S (82.133.91.72) 145.625 ms 156.200 ms 146.071 ms
79 Have.yourself.a.merry.little.Christmas (82.133.91.90) 155.789 ms 147.145 ms 156.051 ms
80 Let.your.heart.be.light (82.133.91.47) 148.308 ms 153.645 ms 147.440 ms
81 From.now.on (82.133.91.12) 148.154 ms 146.651 ms 149.013 ms
82 our.troubles.will.be.out.of.sight (82.133.91.99) 155.112 ms 154.470 ms 155.005 ms
83 oOo (82.133.91.22) 148.033 ms 150.713 ms 150.175 ms
84 Have.yourself.a.merry.little.Christmas (82.133.91.68) 148.183 ms 163.155 ms 155.142 ms
85 Make.the.Yule-tide.gay (82.133.91.52) 146.391 ms 144.659 ms 147.140 ms
86 From.now.on (82.133.91.66) 145.505 ms 146.407 ms 145.916 ms
87 our.troubles.will.be.miles.away (82.133.91.54) 148.138 ms 146.840 ms 153.899 ms
88 oOo (82.133.91.93) 162.987 ms 147.289 ms 156.463 ms
89 Here.we.are.as.in.olden.days (82.133.91.25) 156.433 ms 147.995 ms 157.982 ms
90 Happy.golden.days.of.yore (82.133.91.89) 155.275 ms 149.958 ms 155.338 ms
91 Faithful.friends.who.are.dear.to.us (82.133.91.46) 153.740 ms 146.164 ms 147.598 ms
92 Gather.near.to.us.once.more (82.133.91.69) 156.085 ms 154.181 ms 146.960 ms
93 oOo (82.133.91.85) 154.229 ms 145.904 ms 154.842 ms
94 Through.the.years (82.133.91.39) 159.886 ms 158.502 ms 148.180 ms
95 We.all.will.be.together (82.133.91.33) 147.462 ms 147.589 ms 158.578 ms
96 If.the.Fates.allow (82.133.91.44) 145.665 ms 155.215 ms 149.548 ms
97 Hang.a.shining.star.upon.the.highest.bough (82.133.91.97) 156.918 ms 155.705 ms 163.390 ms
98 And.have.yourself.A.merry.little.Christmas.now (82.133.91.88) 155.173 ms 145.481 ms 153.135 ms
99 o.O (82.133.91.11) 156.867 ms 145.141 ms 156.870 ms
100 48.61.70.70.79.20.48.6f.6c.69.64.61.79.73.20.46.72.65.65.6e.6f.64.65.20.23.63.69.73.63.6f (82.133.91.51) 148.932 ms 155.109 ms 155.636 ms
101 oOoOoOoOoOoOoOoOoOoOoOo.MyssT.oOoOoOoOoOoOoOoOoOoOoOo (77.75.106.106) 155.663 ms 150.728 ms 150.342 ms
Sunday, December 28, 2014
rDNS to Burn Ho Ho Ho
Every year, like clockwork, someone makes sure to do something clever like this. Don't forget to set your max TTL to a little over 100 so you get ALL the lyrics. You don't want to miss out on a Christmas miracle.
Tuesday, December 23, 2014
Apache VirtualHost Proxy Configuration - Helpful for Tomcat, Node.js and similar frameworks
I recently came across this question on ServerFault:
" I've subsonic application running of [sic] tomcat. Everything else works on apache. I don't want to write port number everytime [sic] so I'd like to set-up [sic] a simple directory where subsonic will be accessible. So, I'm trying to make virtualhost file [sic] inside apache dir. [...] I tried many variations, but cannot make anything work. "The poor chap than [sic - ha!] provided an example of his latest go at the problem, an excerpt from his httpd.conf file:
<VirtualHost *:80>
DocumentRoot /var/www/streamer
ProxyPass / http://mini.local:4040/
ProxyPassReverse / http://mini.local:4040/
</VirtualHost>
Not a bad go of it, all things considered. Still, it wasn't providing him with the sort of results he was looking for. Naturally, I had encountered similar issues not long ago myself, with the implementation of the Ghost blogging platform, which runs on node.js. It's been my first serious effort with node, other than the occasional one-off, one-page script for existing web sites.
So, I felt like I might be able to help the gentleman. Now, let's bear in mind his question: "I don't want to write port number everytime". He does not say, "I never want to write port number", just that it should be possible to render the page with a URL that does not append the port number. Ensuring that the port number is never written would require the solution below in addition to another step; there are a few ways to resolve it, but mod_rewrite would be the most obvious and most well known. Some of the other solutions might depend on the version of Apache in use, like an <if> directive, for example.
In any event, here is what I provided to the questioner, which is quite similar to what I have implemented previously and serves my purposes quite well (notice I am using his somewhat strange hostname nomenclature):
<VirtualHost *:80>
ServerName mini.local
ProxyRequests Off
ProxyPreserveHost On
<Proxy *>
AddDefaultCharset Off
Order deny,allow
Allow from all
</Proxy>
ProxyPass / http://mini.local:4040/
ProxyPassReverse / http://mini.local:4040/
</VirtualHost>
Labels:
apache,
node,
node.js,
Proxy,
server fault,
tomcat,
virtualhost
Monday, December 22, 2014
University of Sydney Uses XTF to Index 60's Sci-Fi Comics
I grew up reading pulp science fiction. There was a time when I would never have admitted something like that in public. But times have changed. Computer programming is now a career instead of a bizarre waste of time that might get you arrested. People wear and fiddle with mobile computers, displaying them at coffee shops like peacock plumage. When I was a kid and told adults I liked computers, they assured my parents it was a phase I would grow out of.
As bitter as I may be about the past, I was delighted to find that the University of Sydney Library had combined my youthful passions for computers and science fiction comics into one mammoth project of love. They digitized Frontiers of Science, a comic strip that was a big deal in Australia in the early 60's, by way of the Sydney Morning Herald.
But they did more than just scan the damn things. Any unpaid intern can do that. Instead, they relied on the eXtensible Text Framework (XTF) to provide contextual search capabilities for the comic strip. XTF is designed to let you work with heterogeneous media sources and index them based on disparate types of metadata. It comes in handy if, say, you want to build a database of comics that have already been scanned, and the knucklehead who did the scanning saved some images as BMPs, others as JPGs and the rest as PDFs. All of these files contain usable metadata, and XTF is clever enough to grab it all and re-index it into a consistent format.
So bear this in mind for future projects, and go check out the Frontiers of Science at the University of Sydney Library's web site.
Saturday, December 20, 2014
Symlink Analyzation Function in C
Here is a simple function for performing symlink analysis in C. The readlink function gets the value of the symbolic link filename; the file name that the link points to is copied into buffer. This file name string is not null-terminated, and readlink normally returns the number of characters copied. The size argument specifies the maximum number of characters to copy, usually the allocation size of buffer.
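The code embed itself did not survive in this archive, so here is a minimal sketch of such a function (the name read_symlink and the buffer-doubling strategy are my own choices, not necessarily the original's). It calls readlink in a loop, growing the buffer until the full target fits, then null-terminates the result:

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return the target of the symbolic link `filename` as a freshly
 * allocated, null-terminated string, or NULL on error.  readlink()
 * does not null-terminate and truncates silently, so we grow the
 * buffer until the whole target fits, then terminate it ourselves. */
char *read_symlink(const char *filename)
{
    size_t size = 64;
    for (;;) {
        char *buffer = malloc(size);
        if (buffer == NULL)
            return NULL;
        ssize_t nchars = readlink(filename, buffer, size);
        if (nchars < 0) {           /* not a symlink, or it doesn't exist */
            free(buffer);
            return NULL;
        }
        if ((size_t)nchars < size) {
            buffer[nchars] = '\0';  /* readlink leaves the string unterminated */
            return buffer;
        }
        free(buffer);               /* possibly truncated; retry with more room */
        size *= 2;
    }
}
```

The caller owns the returned string and should free() it when done.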
There is significant utility for such a function in a variety of contexts. Have fun with it!
h/t Tanguy
Windows 7 and Windows 8 Basics: Searching by File Size, Modification Date and Other File Properties
It was one of those days, not long ago: I woke up and realized that I had become an Old Man. Mine is the last generation that remembers a time prior to the internet. I remember using acoustic couplers. My first laptop, a Toshiba, had dual 5 1/4 inch floppy drives but no hard drive. I was so excited when I got my hands on that machine. It meant I could connect to networks using my acoustic coupler from a pay phone!
My ruminations on aging are at least somewhat related to the topic at hand. You see, among the memories rattling around my grey-hair-ensconced head are a few about searching Windows file systems for files of specific types. This sort of thing is very important, even just for everyday, normal computer usage.
When your computer starts running out of space, wouldn't it be nice to be able to find all of the really large files on it? Or perhaps you are looking for an important document you wrote; you can't remember the name of the file, but you remember the week you wrote it. Doing this in Windows XP is straightforward, because the Windows XP search box (what Microsoft calls the "Search Companion") includes these more advanced functions, and accessing it is as simple as clicking the Start button and clicking Search in the resulting contextual menu. Such a search box typically looks similar to this:
[Image: the Windows XP Search Companion. Caption: "Ruff! Searching mad stupid."]
As you can see, selecting size and modification date is simple in this format. However, Microsoft, in its infinite wisdom, decided to abandon this simple and straightforward menu, replacing it with a single magnifying glass icon without any options whatsoever:
Without the simple and easy-to-use Search Companion, how are we supposed to look for files based on their properties instead of their names?
The answer, unfortunately for users only accustomed to graphical interfaces, is a series of command line arguments.
Here is a list of the available search commands for Windows 7 and Windows 8, taken from the relevant Microsoft KB article:
Example search term | Use this to find |
---|---|
System.FileName:~<"notes" | Files whose names begin with "notes." The ~< means "begins with." |
System.FileName:="quarterly report" | Files named "quarterly report." The = means "matches exactly." |
System.FileName:~="pro" | Files whose names contain the word "pro" or the characters pro as part of another word (such as "process" or "procedure"). The ~= means "contains." |
System.Kind:<>picture | Files that aren't pictures. The <> means "is not." |
System.DateModified:05/25/2010 | Files that were modified on that date. You can also type "System.DateModified:2010" to find files changed at any time during that year. |
System.Author:~!"herb" | Files whose authors don't have "herb" in their name. The ~! means "doesn't contain." |
System.Keywords:"sunset" | Files that are tagged with the word sunset. |
System.Size:<1mb | Files that are less than 1 MB in size. |
System.Size:>1mb | Files that are more than 1 MB in size. |
In addition to these commands, users can also use a series of Boolean operators to further refine searches:
Operator | Example | Use this to |
---|---|---|
AND | tropical AND island | Find files that contain both of the words "tropical" and "island" (even if those words are in different places in the file). In a simple text search, this gives the same results as typing "tropical island." |
NOT | tropical NOT island | Find files that contain the word "tropical," but not "island." |
OR | tropical OR island | Find files that contain either of the words "tropical" or "island." |
Although the commands themselves are non-intuitive, using them is straightforward. Simply type the appropriate command into the Windows search box, either in the Start menu or in the top-right corner of a Windows Explorer window. Here is an example, where we have searched for all files larger than 100 MB on the drive C:\
[Image: a search example in Windows 7.]
There are a variety of circumstances where Windows' search implementation will fail to meet a user's needs. First and foremost, the search function is resource-intensive, inaccurate and slow. Compared to Linux's `grep`, `find` and `locate` commands, Windows Search is almost laughably bad, particularly when attempting to search for strings inside of files.
There are other tools available for Windows that vastly improve on the default Windows search function. My recommendation at this time is grepWin, built by Stefan Küng and available for download at its Google Code site.
grepWin allows users to search by simple strings, operators and terms like those described above, providing faster, more accurate responses than Windows' default search. In addition to basic search functionality, grepWin also accepts regular expressions as input. While cryptic, and with a steep initial learning curve, regular expressions are incredibly powerful and a fundamental part of modern computer programming. With regular expressions, you can find specific and complex patterns in large datasets quickly and efficiently. We will almost certainly explore regular expressions in depth in their own post (or perhaps series of posts).
That's it for now on Windows-based searching. When we return to searching in the future, we will likely spend more time on searching databases, arrays and other data structures, as well as providing more theoretical explanations for file system search.
Friday, December 19, 2014
Ghost Blog Platform - Automatic sitemap.xml File Creation
There seems to be some miscommunication making the rounds about the new node.js-based Ghost blogging platform's ability to automatically generate a sitemap.xml file.
As of August of this year (2014), automatic sitemap.xml file generation is a core feature, not a plugin. The new documentation on Ghost's development roadmap on Trello makes this clear:
But if you're like me and usually check on this sort of thing by browsing through the forums, it's easy to get the idea that sitemaps will be handled in some nebulous future plugin:
I'm fairly new to working with node, and my forays with Ghost are partly an excuse to learn a bit more. So far, I like what I see. It's true Ghost isn't as mature as, say, Wordpress, but it also doesn't have the same target painted on its back (ever go through your server logs for bad requests for wp-admin.php?), and a lot of what is not there, or not built well, I can re-do myself.
Anyone else jumping on the node.js blogger bandwagon, either with Ghost or something else? What's your experience been like so far?
Thursday, December 18, 2014
Nah, I smoked a joint
Wednesday, December 17, 2014
GDB Hash Script in C
Here is a very simple implementation of a GDB hash algorithm written in C. The program accepts a single contiguous string of any size as input via a command line argument; you can easily use a file as input with a pipe or redirect.
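The hash code itself is missing from the archived post. As a stand-in, here is a djb2-style string hash (an assumption on my part; the post's exact algorithm is not shown), which prints each character's integer value alongside the running hash so the output can be reviewed character by character:

```c
#include <stdio.h>

/* djb2-style string hash: hash = hash * 33 + c, seeded with 5381.
 * Each character is printed as an integer, together with the hash
 * accumulated so far. */
unsigned long hash_string(const char *s)
{
    unsigned long hash = 5381;
    for (const unsigned char *p = (const unsigned char *)s; *p != '\0'; p++) {
        hash = hash * 33 + *p;
        printf("%u -> %lu\n", *p, hash);
    }
    return hash;
}
```

A small main wrapper passing argv[1] to hash_string would give the `./gdb_script thisismystring` usage the post describes.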
For example:
#./gdb_script < input_file.txt
or
#./gdb_script thisismystring
Each character in the string is printed as a non-contiguous block of integers, for simplicity in reviewing the output.
Enjoy!
Tuesday, December 16, 2014
Linked List Creation in C
Below is an example of how to implement a linked list in C. Note that the example does not include a main function or preprocessor directives.
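The code embed did not survive in this archive; as a stand-in, here is a minimal sketch of a singly linked list (the node layout and function names are my own, not necessarily the original's). Unlike the original, it includes the headers it needs so it compiles on its own:

```c
#include <stdio.h>
#include <stdlib.h>

/* A singly linked list node holding an integer. */
struct node {
    int value;
    struct node *next;
};

/* Push a new node onto the front of the list; returns the new head,
 * or the old head unchanged if allocation fails. */
struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return head;
    n->value = value;
    n->next = head;
    return n;
}

/* Walk the list and print each value. */
void print_list(const struct node *head)
{
    for (const struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->value);
    putchar('\n');
}

/* Free every node in the list. */
void free_list(struct node *head)
{
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}
```

Pushing 1, 2, 3 in that order yields the list 3 -> 2 -> 1, since each push prepends to the head.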
Thursday, December 11, 2014
Dynamically Assign Filenames within an Iterative Loop
This question came up on Stack Exchange: using C, how can you write filenames dynamically within an iterative loop? Or in English: you want to write some stuff to a bunch of different files and have those files named in a logical manner, like file1.file, file2.file, file3.file, etc.
The answer is pretty simple, but a bit of a mind-bender when you are first getting started. I was also pretty astonished that all of the accepted "solutions" to the problem failed to compile. This is a pretty big issue on these programming solution websites.
I think the reason for the problem is that there are two groups of people: the people who want other people to think they can fix stuff, and the people who can fix stuff. The two groups, obviously, have some overlap: people who can fix stuff and who want others to know about it. But there is also a statistically significant portion of both groups that does not overlap: people who can fix stuff who don't care whether other people know about it, and people who can't fix stuff but who want other people to think they can. This last group seems to have a somewhat outsized voice on internet forums. By all means, leave a comment sharing your thoughts on the answer (I will delete it, but go ahead and leave your comment).
So, I contributed my own answer. This answer has the benefit of actually compiling and resolving the problem, which the other "solutions" do not do. Here it is, if you want to spare yourself the click-through:
#include <stdio.h>
#include <string.h>
int main(void)
{
    for (int k = 0; k < 51; k++)
    {
        /* Build this iteration's filename: "0.jpg", "1.jpg", ... */
        char title[16];
        snprintf(title, sizeof title, "%d.jpg", k);
        FILE *img = fopen(title, "a");
        if (img == NULL)
            continue;   /* skip any file we cannot open */
        const char *data = "Write this down";
        fwrite(data, 1, strlen(data), img);
        fclose(img);
    }
}
Tuesday, December 9, 2014
Pointer Fun With Binky From Stanford Computer Lab
Stanford created a strange, claymation character named Binky to teach students about the concept of Pointers in programming. Watch the video and share in the severe WTF-ness that comes along with it.
Barack Obama on Sorting Algorithms
Watch a pre-Presidency Obama chat with Eric Schmidt of Google about sorting algorithms. Oh, how you've changed, Barry.
The Sounds of Sort
The video below provides a graphical display of a number of different sorting algorithms at work. What's unique about this video is that, in addition to the graphical display, each operation in the sorting algorithms has been assigned a tone; in other words, this video lets you hear the algorithms as well as see them! While it sounds counter-intuitive, the "song" the tones play can provide insight into how an algorithm works; at least for me, it is easier to distinguish small differences using sound than with the often frenetic, fast-moving images of sorting visualizations.
Check it out for yourself!
Saturday, December 6, 2014
Asymptotic or "Big O" Notation for Understanding Algorithm Performance
What does it mean for an algorithm to be "fast" or "efficient"? When we say an algorithm is fast, we are not referring to an explicit measure of time, like seconds or minutes. We make this distinction because of the divide between hardware and software: the real-time performance of a program can be drastically impacted by the hardware it runs on.
In order to account for this distinction, while still providing an objective measurement for the efficiency of a program, computer scientists rely on Asymptotic Measurement and a notation format called Big O Notation to describe such measurements. The formal description as a formula is as follows:
f(x) ∈ O(g(x)) if there exist constants x₀ and C such that f(x) ≤ C · g(x) for all x > x₀
In less theoretical terms; as the size of your input increases toward infinity, how does the runtime of your program grow?
For example, imagine counting the number of characters in a string in the simplest way: reading each character and summing as you go. Such an algorithm is said to run in linear time with respect to the number of characters, "n", in the string. This is notated by saying the runtime is O(n) (or "Big O of n").
Using this approach, the time to traverse the entire string is directly proportional to the number of characters (it is linear). Counting the characters in a 20-character string will take about twice as long as counting those in a 10-character string, because you must look at every character, and each character takes the same amount of time to review.
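To make the linear case concrete, here is a small sketch (the function name is illustrative) that counts characters by visiting each one:

```c
/* Count the characters in a null-terminated string by visiting each
 * one.  The running time grows in direct proportion to the length of
 * the input: this is the O(n) algorithm described above. */
unsigned long count_chars(const char *s)
{
    unsigned long n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}
```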
What if linear time, O(n), is not fast enough for your task? For sufficiently large datasets, even a linear scan can be too slow.
Suppose we store the number of characters in the string in a variable "len" earlier in the program, at the point where the string is first built. To determine the string's length later, we merely read the variable's value. Such an operation - checking a predetermined variable - is referred to as an asymptotically constant operation, or O(1) ("Big O of 1"). Why is this label appropriate? Remember what asymptotic means in this context: how does the runtime of your program grow as the input grows? With this type of operation, the runtime remains unchanged by the size of the input. It is constant, regardless of input. Whether your string is one character or a million characters, the operation runs in the same time. There is, however, a trade-off to this method: you must reserve memory to store the variable, and there is overhead in keeping it up to date as well.
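A minimal sketch of the stored-length idea, assuming a small wrapper class of my own invention (`CachedString`):

```python
class CachedString:
    """Store the length once, up front, so later lookups are O(1)."""

    def __init__(self, text):
        self.text = text
        self.length = len(text)   # paid once; costs a little memory to keep

    def get_length(self):
        return self.length        # constant time, regardless of text size

s = CachedString("a million-character string could go here")
print(s.get_length())
```

The lookup reads one stored value and never touches the text itself, so its cost does not change as the text grows.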
There are a variety of common Big O classes. O(n^2) algorithms, for example, can be slower than linear O(n) algorithms. O(n^2) algorithms are not *always* slower, but they fall behind as their input grows, and they fall behind at a quadratic rate: doubling the input quadruples the work. For large datasets, O(n^2) algorithms are usually inappropriate. For smaller datasets, however, they can be acceptable, and may even run faster than O(n) alternatives.
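The quadratic growth is easy to see by counting operations in a nested loop - the hallmark shape of many O(n^2) algorithms (the function here is an illustrative sketch, not any particular named algorithm):

```python
def count_pair_comparisons(items):
    """Compare every element against every element - O(n^2) operations."""
    ops = 0
    for i in range(len(items)):
        for j in range(len(items)):
            ops += 1
    return ops

print(count_pair_comparisons(range(10)))  # 100
print(count_pair_comparisons(range(20)))  # 400: doubling n quadruples the work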
Another common class is O(log n). The classic "binary search" algorithm for presorted datasets is one such algorithm. Doubling the size of the array adds only one more operation to a binary search in the worst case. Binary search and other logarithmic algorithms are incredibly powerful when working with large datasets.
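A standard binary search can be sketched as follows - each pass through the loop discards half of the remaining range, which is where the log n comes from:

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 - O(log n)."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # look at the middle of the range
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1                         # not found

print(binary_search([2, 5, 8, 12, 23, 38, 56, 72], 23))  # 4
```

Python's standard library offers the same technique via the `bisect` module.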
The actual runtime of such algorithms also depends on luck. With binary search, for example, there is some chance that the desired item happens to sit at the exact center of the array. In that case, the search completes in a single operation, no matter how large the array is. Because of these probabilistic outcomes, asymptotic notation also includes expressions for "best case" and "worst case" outcomes - the lower and upper bounds on runtime.
Again using binary search as our example, the best case is, as mentioned, finding the target right in the middle of the array. That best case is always one operation, regardless of input size, so the best case is constant. Best case scenarios are noted using Ω, so binary search is said to run in Ω(1).
In the worst case, binary search is O(log n). Worst case scenarios are expressed using the Big O notation already covered.
Turning to linear search, we see that linear algorithms also have a best case of Ω(1), because the item searched for might be the first item of the array. The worst case is the item being the last item of the array (or absent entirely), expressed as O(n).
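The best and worst cases of linear search are visible in a simple sketch - the position of the target decides how many elements get examined:

```python
def linear_search(items, target):
    """Scan left to right - best case Omega(1), worst case O(n)."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))  # 0 - best case: target is the first element
print(linear_search(data, 5))  # 4 - worst case: target is the last element
```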
One last notation we will cover is Θ. Θ expresses algorithms whose best and worst cases are the same. One example where Θ is appropriate is the stored-length variable from earlier, where the runtime is always the same: since it is both Ω(1) and O(1), it is Θ(1).
Friday, December 5, 2014
Apache Log Pong
Looking for an Apache log visualization program recently, I came across logstalgia. It turns your log files into a game of Pong between your web server and the internet, with each new request as the ball going back and forth!
So cool.
Eric Garner's Last Words
Every time you see me, you want to mess with me. I'm tired of it. It stops today. Why would you...? Everyone standing here will tell you I didn't do nothing. I did not sell nothing. Because everytime you see me, you want to harass me. You want to stop me [garbled] Selling cigarettes. I'm minding my business, officer, I'm minding my business. Please just leave me alone. I told you the last time, please just leave me alone. please please, don't touch me. Do not touch me. [garbled] I can't breathe. I can't breathe. I can't breathe. I can't breathe. I can't breathe. I can't breathe. I can't breathe. I can't breathe. I can't breathe.
Tuesday, December 2, 2014
WonkaLeaks
Paid FBI informant, accused sex criminal and convicted embezzler Sigurdur Thordarson, who was paid thousands of dollars by his DOJ handler for 8 hard drives filled with internal Wikileaks documents, bears an uncanny resemblance to Augustus Gloop from Willy Wonka. Coincidence, or something more sinister?