<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts tagged "programming" - nolan caudill&#39;s internet house</title>
    <link>https://nolancaudill.com/tags/programming/</link>
    <description>Posts tagged "programming" on nolan caudill&#39;s internet house</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sun, 30 Sep 2012 07:00:00 +0000</lastBuildDate>
    <atom:link href="https://nolancaudill.com/tags/programming/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Twitter to Flickr, and Back Again</title>
      <link>https://nolancaudill.com/2012/09/30/twitter-flickr/</link>
      <pubDate>Sun, 30 Sep 2012 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2012/09/30/twitter-flickr/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/8037930464/&#34; title=&#34;View from the front porch by Nolan Caudill, on Flickr&#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/flickr/8037930464_72dee06eac_z.jpg&#34; alt=&#34;View from the front porch&#34;&gt;&lt;/a&gt;
I just got back from a week in Hawaii and only took my film camera. I shot 5 or 6 rolls of film and I look forward to getting them up on &lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/&#34;&gt;Flickr&lt;/a&gt; but there are many steps between those 35mm canisters still sitting in the bag I need to unpack and someone being able to click a link to look at them. (Unless everyone wants to come over to my house, which is also fine with me.) I think this is worth the wait but it does remove a bit of the &lt;em&gt;instantaneousness&lt;/em&gt; that services like Flickr and Twitter offer. It&amp;rsquo;s fun sitting on a palm-covered beach or enjoying a tropical drink on a warm patio with a slow-moving fan, taking a picture, and sending a modern-day, wish-you-were-here postcard to a few friends.
Today, the snapshot app of choice among my friends appears to be Instagram. This is perfectly fine and I use it a bit, but I&amp;rsquo;m a &lt;a href=&#34;http://www.imdb.com/title/tt0190590/quotes?qt=qt0404012&#34;&gt;Flickr man&lt;/a&gt; and I&amp;rsquo;d rather use that, especially since the rest of my Hawaii photos will go there. It&amp;rsquo;s nice to make a big set of all of the vacation photos, and be able to email the link off to Mom and Pop, and even nicer to be able to see them again together in 1 year, 5 years, 25 years&amp;rsquo; time. As far as Flickr goes, I feel pretty good about their &lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/archives/&#34;&gt;thoughts about longevity&lt;/a&gt;.
The postcard delivery system, to extend (and strain) the metaphor, is Twitter. Twitter is probably the best spot to put things where people will see them sooner than later. Instagram comes equipped with its own social network but Twitter is the common stomping ground of me and my friends and acquaintances.
&lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/6900870098/&#34; title=&#34;My Twitter social graph visualized by Recollect by Nolan Caudill, on Flickr&#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/flickr/6900870098_00b9450ac5_z.jpg&#34; alt=&#34;My Twitter social graph visualized by Recollect&#34;&gt;&lt;/a&gt;
&lt;em&gt;&lt;a href=&#34;http://thatsaspicymeatball.com/&#34;&gt;Bert&lt;/a&gt; and &lt;a href=&#34;http://roundhere.net/&#34;&gt;Chris&lt;/a&gt; built a thing for &lt;a href=&#34;http://recollect.com&#34;&gt;Recollect&lt;/a&gt; mapping your Twitter social clusters. Also, Meghan told me to put more pictures in my blog posts.&lt;/em&gt;
The crux of the problem was how to get my photos directly to Twitter and Flickr without building a Rube Goldberg device, because things with fewer moving parts break less and are easier for me to understand.
Going through a multi-step process, especially when I&amp;rsquo;m on the go (it&amp;rsquo;s mobile!) and when I&amp;rsquo;m trying to enjoy my surroundings (it&amp;rsquo;s social!), sounds horrible. I want one app and to be able to hit one button. Flickr does have a mobile app, which is serviceable, but I usually already have Twitter up and most Twitter clients have this nice ability to take pictures within the app. With my phone, I&amp;rsquo;m usually sending a tweet with a photo attached, and not a Flickr photo that I also want to share on Twitter. Twitter to me is the Instant, which is usually what I want when on the go.
Twitter is, in its fundamental glory, a &lt;a href=&#34;http://laughingmeme.org/2012/09/12/app-net-and-cargo-culting/&#34;&gt;magic word distribution system&lt;/a&gt; (via &lt;a href=&#34;http://laughingmeme.org&#34;&gt;Kellan&lt;/a&gt;, via &lt;a href=&#34;http://aaronland.info/&#34;&gt;Aaron&lt;/a&gt;). Most Twitter clients allow you to do media-webby things like upload a video or a picture to a service of your choosing, get a link back in return, and then helpfully include that link for you in the tweet. This outsourced-upload thing uses what is formally called OAuth Echo. This is described &lt;a href=&#34;https://dev.twitter.com/docs/auth/oauth/oauth-echo&#34;&gt;here&lt;/a&gt; and seems to have originally been thought of by &lt;a href=&#34;http://mehack.com/oauth-echo-delegation-in-identity-verificatio&#34;&gt;Raffi Krikorian&lt;/a&gt; of Twitter.
&lt;a href=&#34;http://www.flickr.com/photos/kellan/3120765898/&#34; title=&#34;Magic word distribution system by kellan, on Flickr&#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/flickr/3120765898_c8f4929da0_z.jpg&#34; alt=&#34;Magic word distribution system&#34;&gt;&lt;/a&gt;
Flickr is not one of the upload options, but things like cloudapp, droplr, pikchur, twitgoo are (at least in &lt;a href=&#34;http://tapbots.com/software/tweetbot/&#34;&gt;Tweetbot&lt;/a&gt;). I&amp;rsquo;ll take most of the blame for Flickr not being included as it was something that I was working on in my side time towards the end of my tenure, but didn&amp;rsquo;t finish before I left. One service does handle this handshake of Twitter-to-Flickr, &lt;a href=&#34;http://gdzl.la/&#34;&gt;gdzl.la&lt;/a&gt;, but it returns the link back as a gdzl.la link, effectively introducing one more URL forwarder into the world (which is a shitty thing to do if you can help it).
Twitter for iPhone used to support different image backends but probably took it out shortly after they built their own image upload thing. So there&amp;rsquo;s that.
So, after a week of wanting to take pictures with my phone and send them along to Twitter, and having to choose between Instagram (look at all those filters!) and Twitter&amp;rsquo;s Official Image Backend™ (store up to 3200 pictures!), I decided to build my own Twitter-to-Flickr uploader atop Aaron&amp;rsquo;s &lt;a href=&#34;http://straup.github.com/parallel-flickr/&#34;&gt;parallel-flickr&lt;/a&gt;, of which I also run an instance for myself.
In theory, this OAuth Echo upload stuff could live by itself (see gdzl.la) and there&amp;rsquo;s no reason that I couldn&amp;rsquo;t return my parallel-flickr instance&amp;rsquo;s URL, but there&amp;rsquo;s something nice about saying &amp;ldquo;here&amp;rsquo;s my Flickr kit&amp;rdquo;, playing along with the aforementioned idea of fewer moving parts as well as knowing all the archival bits-and-pieces going on. Using &lt;a href=&#34;https://github.com/exflickr/flamework&#34;&gt;flamework&lt;/a&gt;, the pieces of p-flickr that were already there, a few cups of coffee, and a chunk of quiet time, I was able to bolt it on.
One important thing that made this possible is that these 3rd party clients, knowing that they are building things that the official client won&amp;rsquo;t or can&amp;rsquo;t build, WANT you to build more things to fill the gaps. Tweetbot, at the end of its list of the dozen or so included image backends, has a field marked &amp;ldquo;Custom&amp;hellip;&amp;rdquo; where you can enter a URL endpoint that knows the steps of the OAuth Echo dance. This kind of allowance and permission is refreshing as things become &lt;a href=&#34;https://dev.twitter.com/blog/changes-coming-to-twitter-api&#34;&gt;increasingly less so&lt;/a&gt;.
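For a sense of what that custom endpoint has to do, here is a minimal sketch of the OAuth Echo verification step. The two header names come from Twitter&amp;rsquo;s OAuth Echo documentation; everything else (the function name, the injected http_get, the provider-URL check) is hypothetical scaffolding, not parallel-flickr&amp;rsquo;s actual code, and it is Python rather than the PHP the real thing is written in.

```python
def verify_oauth_echo(headers, http_get):
    # OAuth Echo, as an upload endpoint sees it. The client sends two
    # headers (names per Twitter's OAuth Echo docs): the URL of the
    # identity-verifying call, and a signed OAuth Authorization header.
    provider = headers.get("X-Auth-Service-Provider", "")
    auth = headers.get("X-Verify-Credentials-Authorization", "")
    # Hypothetical safety check: only replay credentials against Twitter,
    # never against an arbitrary URL a client hands us.
    if not provider.startswith("https://api.twitter.com/"):
        return False
    # Replay the signed header against the provider; HTTP 200 means the
    # uploader is who they claim to be, so the photo can be accepted.
    status, _body = http_get(provider, {"Authorization": auth})
    return status == 200
```

Once verification succeeds, the endpoint stores the photo wherever it likes and hands back the URL that the client pastes into the tweet.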
&lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/8041787454/&#34; title=&#34;Love it when things say &#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/flickr/8041787454_5f022f2a58_n.jpg&#34; alt=&#34;Love it when things say &#34;&gt;&lt;/a&gt;
&lt;em&gt;No, YOU drive.&lt;/em&gt;
It looks like Aaron merged this change and the upload branch (which made my part really easy) this morning so if you&amp;rsquo;re running parallel-flickr, feel free to kick the tires on it, and if not, look at the code and see how easy it is to do.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>A natural progression...</title>
      <link>https://nolancaudill.com/2012/03/05/23/</link>
      <pubDate>Mon, 05 Mar 2012 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2012/03/05/23/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Yep, the import is slow. I know and I haven&amp;rsquo;t done much optimizing in the code. The Route I&amp;rsquo;m going is&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;No Tool&lt;/li&gt;
&lt;li&gt;A Tool&lt;/li&gt;
&lt;li&gt;A fast Tool&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;And I&amp;rsquo;m currently working on 2.&lt;/p&gt;
&lt;p&gt;&amp;ndash; From &lt;a href=&#34;https://github.com/MaZderMind/osm-history-renderer#readme&#34;&gt;https://github.com/MaZderMind/osm-history-renderer#readme&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;A spin on the &amp;ldquo;&lt;a href=&#34;http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast&#34;&gt;Make it work. Make it right. Make it fast&amp;rdquo;&lt;/a&gt; adage, but I enjoyed seeing it in the wild. I feel like I&amp;rsquo;m almost always working on step 2.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>The Code Behind the Yearbook</title>
      <link>https://nolancaudill.com/2012/01/23/yearbook/</link>
      <pubDate>Mon, 23 Jan 2012 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2012/01/23/yearbook/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is lifted from the README to &lt;a href=&#34;https://github.com/mncaudill/yearbook&#34;&gt;the github repository&lt;/a&gt;.&lt;/em&gt;
&lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/6752628359/&#34; title=&#34;The Yearbook by Nolan Caudill, on Flickr&#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/flickr/6752628359_d764b003d8.jpg&#34; alt=&#34;The Yearbook&#34;&gt;&lt;/a&gt;
I decided to take all the blog posts, Twitter messages, and Flickr images I made this year, combine them, typeset them, and then get it printed
in a hard-bound book. I wrote a bit about the reasoning &lt;a href=&#34;http://nolancaudill.com/2011/11/29/concrete-words/&#34;&gt;here&lt;/a&gt;.
There was a lot of poking and pawing at the scripts I used to create the final product so I thought I&amp;rsquo;d share them in case someone else could get some use out of them.
Big warning: these are mostly worthless until you change them to fit your project. While all the code here works, and it ended up giving me a decent-looking book, you&amp;rsquo;ll need to modify it, which is mostly the point. This is &lt;em&gt;your&lt;/em&gt; retrospective and thus shouldn&amp;rsquo;t be a cookie-cutter running of the code I wrote (if that would even work).
I&amp;rsquo;ll now explain a bit about the pieces:&lt;/p&gt;
&lt;h2 id=&#34;the-blog-posts&#34;&gt;The Blog Posts&lt;/h2&gt;
&lt;p&gt;All my blog posts are just flat HTML (via jekyll) so getting my blog onto my PC was already done. You&amp;rsquo;ll probably need to run some magic incantation of &lt;code&gt;wget&lt;/code&gt; or &lt;code&gt;curl&lt;/code&gt; to get yours if they&amp;rsquo;re hosted somewhere else.
TeX, specifically pdflatex, was the workhorse on typesetting it so I needed to get these HTML files into tex format. I ran a &lt;code&gt;find . -name &amp;quot;*html&amp;quot; | xargs -I{} python texify.py {}&lt;/code&gt; in my jekyll&amp;rsquo;s site directory which then ran each of the files through &lt;a href=&#34;http://johnmacfarlane.net/pandoc/&#34;&gt;pandoc&lt;/a&gt;. Pandoc is a super magic text transformation library that will slurp in most text format and then spit out a transformed version. In this case, I was reading HTML and spitting out .tex files. You can see the command in &lt;code&gt;texify.py&lt;/code&gt;.
After I had all these converted tex files, I actually loaded all my files up in vim, made a macro that cleaned out things like header and footer, and then just ran the macro across all the open files. I forgot this magic spell almost as soon as I did it, but &lt;code&gt;bufdo&lt;/code&gt; sounds familiar. I&amp;rsquo;d google something like &amp;ldquo;vim macro across all open buffers&amp;rdquo; or something.
Now that you have a directory full of tex files, one per blog post, you need a master tex file that describes the full document, as well as pointers to all the various tex files to include. This is the &lt;code&gt;book.tex&lt;/code&gt; file in this repository. This is mine lifted as-is, so it shows what the finished result looks like and should give you a good idea of how to put yours together.
TeX is a frustratingly arcane markup language, but it is extremely powerful and can create beautiful documents. It&amp;rsquo;s worth it, trust me.
I&amp;rsquo;ve also included a sample blog post tex file. This post includes a couple of images via &lt;code&gt;includegraphics&lt;/code&gt; to give you a head start on that.&lt;/p&gt;
&lt;h2 id=&#34;twitter&#34;&gt;Twitter&lt;/h2&gt;
&lt;p&gt;To format your Twitter posts, you first need the actual Twitter messages. This is actually hard, if not impossible, if you&amp;rsquo;re especially prolific.
Twitter famously only allows you to fetch your last 3200 messages. This limit is enforced both on the official website and by the API.
I&amp;rsquo;ve been running &lt;a href=&#34;http://pongsocket.com/tweetnest/&#34;&gt;tweetnest&lt;/a&gt; on my server for a year or so, mainly because I think it&amp;rsquo;s pretty, but it turned out to do a whizbang job of archiving as well. Surprise, surprise: this was the source of Twitter messages for my book. I just dumped the table to a text file (via &lt;code&gt;mysqldump&lt;/code&gt;) and used that as my source file.
Inside &lt;code&gt;twitter/tweet_transform.php&lt;/code&gt;, you&amp;rsquo;ll see it read this file and then spit out the tex file, separating the messages by month and then by day.
There are some positively Nolan-specific things in here. All the dates in Tweetnest (and probably Twitter&amp;rsquo;s API) return a timestamp for each Tweet using seconds since the epoch. If I only tweeted from San Francisco in all of 2011, getting nice dates would have been easy: just set the timezone at the top of the script and then call it a day. But as it turned out, I climbed on and off airplanes at various locations and at different times. You&amp;rsquo;ll see a block of code that dynamically sets the timezone according to when I was boarding and de-boarding airplanes.
Another sort of fuzzy, human thing I added to this that you may want to be aware of is that I fudged the edges of what constituted a &amp;ldquo;day&amp;rdquo;. Instead of a day being midnight to midnight, I grouped tweets on a 4am boundary. Best I could tell, I never tweeted before 4am after waking up, and never tweeted past 4am by staying up from the night before. This way a day is defined as waking up to going asleep (or passing out, some nights).
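The 4am boundary is easy to implement: shift each timestamp back four hours before taking its calendar date. Here is a sketch of that bucketing (in Python for illustration, not lifted from tweet_transform.php; the function name and the flat per-tweet timezone offset are my own simplifications):

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)

def tweet_day(ts, tz_offset_hours=0, boundary_hour=4):
    # Convert the epoch-seconds timestamp to an assumed local time,
    # then shift back by the boundary hour so that 00:00-03:59 counts
    # as the previous day (the "still awake from last night" tweets).
    local = EPOCH + timedelta(seconds=ts, hours=tz_offset_hours)
    return (local - timedelta(hours=boundary_hour)).date()
```

A tweet at 2am lands on the previous date; one at 5am starts the new day.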
This script also follows some common URL shorteners and expands their links so you won&amp;rsquo;t see any bit.ly or goo.gl links in your permanent archive.
The hard part of getting the Twitter section together is actually getting the tweets together, but once you do that, it&amp;rsquo;s a breeze.&lt;/p&gt;
&lt;h2 id=&#34;flickr&#34;&gt;Flickr&lt;/h2&gt;
&lt;p&gt;I uploaded about 600 pictures to Flickr this year. I really wanted to display every single picture for the sake of completeness but figuring out a way to do that visually was difficult.
I ended up going with something like &lt;a href=&#34;http://images.google.com/search?q=kitten&amp;amp;hl=en&amp;amp;site=webhp&amp;amp;tbm=isch&#34;&gt;Google&amp;rsquo;s image search&lt;/a&gt;. &lt;a href=&#34;http://www.flickr.com/photos/protohiro&#34;&gt;Stephen Woods&lt;/a&gt; was also a major source of inspiration for the layout. This layout lets you plop a lot of images on a page and lets them use their natural dimensions to shoulder out more space as needed.
Instead of forcing tex to lay out individual images, or individual rows, I figured it would be easier to create an image that represented the full page and then put that on the page, not unlike people adding &lt;code&gt;&amp;lt;area&amp;gt;&lt;/code&gt; tags to full-page images in the early days of the web.
The &lt;code&gt;flickr/justified.php&lt;/code&gt; file is what creates these image files and then the &lt;code&gt;flickr.tex&lt;/code&gt; file that includes them all.
I used Aaron Cope&amp;rsquo;s &lt;a href=&#34;http://straup.github.com/parallel-flickr/&#34;&gt;parallel-flickr&lt;/a&gt; as the source of the images. This project conveniently creates an easy-to-query database so I could do something like &amp;ldquo;give me all the images from Jan 1, 2011, to Dec 31, 2011 ordered by date_taken ascending&amp;rdquo;. I used the output of this query to select the appropriate images in the correct order and rsynced them to my book&amp;rsquo;s Flickr directory.
There are a few fuzzy parameters that let you set things like a maximum row height, and how wide your rows are. Feel free to twiddle these knobs as you see fit.&lt;/p&gt;
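The greedy algorithm behind this style of justified layout is simple enough to sketch. This Python version is illustrative only: &lt;code&gt;justified.php&lt;/code&gt; does its own variant, and the function and parameter names below are mine, not the script&amp;rsquo;s.

```python
def justify(images, row_width, target_height):
    """Greedy justified layout: scale each (w, h) image to a common
    target height, pack images into a row until it overflows, then
    rescale the whole row so it exactly fills row_width."""
    rows, row, width = [], [], 0.0
    for w, h in images:
        scaled_w = w * target_height / h   # width at the common height
        row.append((scaled_w, float(target_height)))
        width += scaled_w
        if width >= row_width:             # row full: justify it
            scale = row_width / width
            rows.append([(rw * scale, rh * scale) for rw, rh in row])
            row, width = [], 0.0
    if row:                                # last, partial row stays ragged
        rows.append(row)
    return rows
```

Every full row comes out exactly row_width wide, and portrait photos naturally take less horizontal space, which is what lets images shoulder out room as needed.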
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Nothing about this is drop-in-and-run but there are a lot of gotchas that I came across that might help someone else if they ever decide to tackle a project like this.&lt;/p&gt;
&lt;h2 id=&#34;the-code&#34;&gt;The Code&lt;/h2&gt;
&lt;p&gt;Feel free to browse the code at &lt;a href=&#34;https://github.com/mncaudill/yearbook&#34;&gt;my github repository&lt;/a&gt;.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>How to Build a Quine</title>
      <link>https://nolancaudill.com/2011/01/01/how-to-build-a-quine/</link>
      <pubDate>Sat, 01 Jan 2011 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2011/01/01/how-to-build-a-quine/</guid>
      <description>&lt;p&gt;A quine is simply a program that prints itself. This is my explanation of how they are built.&lt;/p&gt;
&lt;p&gt;Writing a quine seems like a chicken-and-egg problem but if you really enforce the physical separation between data and code&amp;ndash;that is, defining the string to print and the code that prints it out&amp;ndash;it&amp;rsquo;s a straightforward programming exercise.&lt;/p&gt;
&lt;p&gt;I divided the quine program into three parts: the data, the code that prints the first part of the program and the data, and the code that uses the data to print the entirety of the code (this section and the former). The data that I mentioned is just a representation of the code part of the program. After breaking the program into these three parts, it becomes clear that the two code parts (the print-it-out part and the turn-it-into-code part) are two simple loops that can live completely independent of the data. The key to the trick is to write the code that uses the data and then turn that code into data after the fact, thus giving the program knowledge of how to print itself.&lt;/p&gt;
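The same data/code split shows up in miniature in the classic two-line Python quine below. It uses a different encoding than the ASCII-array approach described in this post: %r regenerates the quoted string, which handles exactly the quoting-and-escaping problem the decimal encoding sidesteps. Comments are omitted because they would have to reproduce themselves too.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Here s is the data, print(s % s) is the code, and the %r substitution is the turn-data-back-into-code step; running it prints exactly those two lines.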
&lt;p&gt;Here&amp;rsquo;s the full paste of  the quine which should make it easier to follow along:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://gist.github.com/749531.js&#34;&gt;https://gist.github.com/749531.js&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I found it easier to store the code inside the data part of the program as the decimal ASCII values of its characters. This representation avoided unfun issues such as getting the quoting and escaping right. So I wrote the whole program and left the data variable blank since this would be built from the code that was yet to be written!&lt;/p&gt;
&lt;p&gt;The first part of the code focused on printing the PHP boilerplate, the newlines down to the first variable, and the variable declaration itself. I then looped through the data array of integers and printed them. I then printed out the closing characters of the variable declaration.&lt;/p&gt;
&lt;p&gt;The second part of the code turned the data array of ASCII values into their associated characters and printed them. Once again, this was just a simple loop that went through each ASCII value, converted it to its character representation, and then printed out the character. To me, this is the interesting part as the code is actually printing out itself.&lt;/p&gt;
&lt;p&gt;To actually build the data array, I wrote a second program that took a block of text and printed out a comma-delimited string of the ASCII values. I copied and pasted the code portion of my quine, which was everything below the data array, and ran it through this second program, taking its output and placing it into the quine program as the data variable.&lt;/p&gt;
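That second program is only a couple of lines. A sketch of it in Python (the stand-in code string is mine, and the original was presumably PHP):

```python
# Turn a pasted block of code into the comma-delimited decimal ASCII
# string that becomes the quine's data array.
code = 'print("hi")'  # stand-in for the copied code portion of the quine
print(",".join(str(ord(c)) for c in code))
```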
&lt;p&gt;Now I had the full quine done. I had the data array, which represented the actual code of the quine: one code part that printed out the PHP boilerplate and the actual data array and another code part that looped through the data array and printed its ASCII representation which turns out to be the remaining code.&lt;/p&gt;
&lt;p&gt;Running the program and then diffing the output with the original program resulted in nothing. Identical programs, success.&lt;/p&gt;
&lt;p&gt;After I finished this, I wanted to do the trick where you have the first program print a second program in another language that prints out a program in a third language and so on that finally prints out the original program. After writing the first quine and seeing how it worked, this turned out to be a simple exercise in quoting and escaping among the various programming languages.&lt;/p&gt;
&lt;p&gt;The key to the polyglot quine is that only the very last program (i.e., the one that prints out the original program) is the one that does any real work. All of the other ones are programs that just say &amp;ldquo;print this string&amp;rdquo; where the string is the next program. I modified my PHP quine, translating the code portions into JavaScript, and having the PHP program simply print the JavaScript. This gave me a PHP program that printed out a JavaScript program that would then print out the original PHP program.&lt;/p&gt;
&lt;p&gt;To add more languages, I would take the existing print statement, wrap the string in the next language&amp;rsquo;s print function, recalculate my data array and then move to the next language. After doing it once, I realized that there wasn&amp;rsquo;t anything technically difficult to the polyglot quines and felt somewhat embarrassed passing it off as something impressive. I imagine it&amp;rsquo;s a lot like a magician&amp;rsquo;s act: there&amp;rsquo;s a lot of fireworks and pretty ladies and the tiger is wearing a top hat but the trick is that there is a mirror brought in at the end that makes the actual &amp;ldquo;magic&amp;rdquo; happen and the rest is just for show. I added a few common languages, figured that was enough garnish, and stopped at a PHP-&amp;gt;C-&amp;gt;Python-&amp;gt;JS-&amp;gt;PHP program.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://gist.github.com/749686.js&#34;&gt;https://gist.github.com/749686.js&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Reading how a quine is built is one thing but actually working through the problem and seeing that code and data, which we usually keep separate, can be melded into the same thing is really a revelatory experience. It may be a bit like learning how a magic trick is done but the deeper, Lisp-like knowledge that you get from actually seeing that &amp;ldquo;code is data and data is code&amp;rdquo; is well worth it.&lt;/p&gt;
&lt;p&gt;Further reading: &lt;a href=&#34;http://www.madore.org/~david/computers/quine.html&#34;&gt;http://www.madore.org/~david/computers/quine.html&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    <item>
      <title>How many fume cupboards are needed?</title>
      <link>https://nolancaudill.com/2009/06/08/how-many-fume-cupboards-are-needed/</link>
      <pubDate>Mon, 08 Jun 2009 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2009/06/08/how-many-fume-cupboards-are-needed/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve recently been brushing up on my statistics by reading Principles of Statistics by M.G. Bulmer. I came across a problem and since I wrote some code to check my answer, I figured I&amp;rsquo;d post it with a short discussion about the answer.&lt;/p&gt;
&lt;p&gt;First, the question:&lt;/p&gt;
&lt;p&gt;In a certain survey of the work of chemical research workers, it was found, on the basis of extensive data, that on average each man required no fume cupboard for 60 per cent of his time, one cupboard for 30 per cent and two cupboards for 10 per cent; three or more were never required. If a group of four chemists worked independently of one another, how many fume cupboards should be available in order to provide adequate facilities for at least 95 per cent of the time?&lt;/p&gt;
&lt;p&gt;My line of thinking to solve this was to find every combination of the 4 chemists needing 0, 1, or 2 cupboards, the probability of each of those combinations happening, and finally summing up the probability of all the hoods needed.&lt;/p&gt;
&lt;p&gt;For example, out of the 81 different possible combinations of cupboards required (3 * 3 * 3 * 3, with the 3 coming from 0, 1, or 2 hoods needed), there is only one way where 0 hoods are needed in total and this is where all 4 chemists need no cupboards. Following this, there are 4 ways for exactly 1 hood to be required in total, with each of the chemists exclusively requiring a cupboard and the other three needing none (1, 0, 0, 0 &amp;amp; 0, 1, 0, 0 &amp;amp; 0, 0, 1, 0 &amp;amp; 0, 0, 0, 1).&lt;/p&gt;
&lt;p&gt;So having one cupboard covers the probability of needing no cupboards amongst the 4 PLUS the probability of needing 1 cupboard amongst the 4.&lt;/p&gt;
&lt;p&gt;I first did this problem long-handed, figuring out the probability of 0, 1, 2, and so on cupboards until I got to a sum that had a probability &amp;gt; 0.95. I was making a simple arithmetic error (as usual) and my answer was not matching up with what was in the back of the book, so I thought I would write a simple program to calculate the answer since I was confident in what I was trying to do, but was just having trouble multiplying and adding.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the Python script I wrote (note: you need Python &amp;gt;= 2.6 as I use itertools.product to generate all the combinations of cupboards needed).&lt;/p&gt;
&lt;p&gt;The output is the running sum of the probabilities of needing 0 cupboards, 1 cupboard, 2 cupboards, and so on. The first line with a probability greater than 0.95 is the answer. In this case, the answer was 4, which would cover the chemists&amp;rsquo; needs 95.85 percent of the time.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-py&#34; data-lang=&#34;py&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f92672&#34;&gt;from&lt;/span&gt; collections &lt;span style=&#34;color:#f92672&#34;&gt;import&lt;/span&gt; defaultdict
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f92672&#34;&gt;import&lt;/span&gt; itertools
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;probs &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#39;0&amp;#39;&lt;/span&gt;: &lt;span style=&#34;color:#ae81ff&#34;&gt;0.6&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#39;1&amp;#39;&lt;/span&gt;: &lt;span style=&#34;color:#ae81ff&#34;&gt;0.3&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#39;2&amp;#39;&lt;/span&gt;: &lt;span style=&#34;color:#ae81ff&#34;&gt;0.1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;trials &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; itertools&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;product(&lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#39;012&amp;#39;&lt;/span&gt;, repeat&lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#ae81ff&#34;&gt;4&lt;/span&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;totals &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; defaultdict(float)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#66d9ef&#34;&gt;for&lt;/span&gt; trial &lt;span style=&#34;color:#f92672&#34;&gt;in&lt;/span&gt; trials:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#75715e&#34;&gt;#how many hoods needed in this trial&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    trial_sum &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; sum(map(&lt;span style=&#34;color:#66d9ef&#34;&gt;lambda&lt;/span&gt; x: int(x), trial))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#75715e&#34;&gt;#figure probability of exact trial occurring&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    total_prob &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#ae81ff&#34;&gt;1.0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#66d9ef&#34;&gt;for&lt;/span&gt; item &lt;span style=&#34;color:#f92672&#34;&gt;in&lt;/span&gt; trial:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        total_prob &lt;span style=&#34;color:#f92672&#34;&gt;*=&lt;/span&gt; probs[item]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#75715e&#34;&gt;#add probability of trial to total prob for this number of hoods needed&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    totals[trial_sum] &lt;span style=&#34;color:#f92672&#34;&gt;+=&lt;/span&gt; total_prob
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#75715e&#34;&gt;#print out all probabilities&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;keys &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; totals&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;keys()
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;keys&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;sort()
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;running_prob &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#ae81ff&#34;&gt;0.0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#66d9ef&#34;&gt;for&lt;/span&gt; i &lt;span style=&#34;color:#f92672&#34;&gt;in&lt;/span&gt; keys:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    running_prob &lt;span style=&#34;color:#f92672&#34;&gt;+=&lt;/span&gt; totals[i]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    print i, running_prob &lt;span style=&#34;color:#f92672&#34;&gt;*&lt;/span&gt; &lt;span style=&#34;color:#ae81ff&#34;&gt;100&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
    </item>
    <item>
      <title>SICP and Lulu.com</title>
      <link>https://nolancaudill.com/2009/03/13/sicp-and-lulu-com/</link>
      <pubDate>Fri, 13 Mar 2009 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2009/03/13/sicp-and-lulu-com/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been wanting to read &lt;a href=&#34;http://mitpress.mit.edu/sicp/&#34;&gt;SICP&lt;/a&gt; for awhile, but with lots of other books on my to-read list, as well as the $50 dollar price tag for a used copy, I&amp;rsquo;ve put it on hold. The price, while relatively steep, usually doesn&amp;rsquo;t stop me from picking up a highly-desired book, but I held off mainly as the book is freely available on their website, under the Creative Commons Attribution-Noncommercial license and this seems like a lot to pay for a free-as-in-beer book.&lt;/p&gt;
&lt;p&gt;Since SICP runs close to 600 printed pages and approximately 40 HTML files, I&amp;rsquo;d rather not read it in my browser, and printing it on the home printer is not really an option. I decided that using &lt;a href=&#34;http://lulu.com&#34;&gt;Lulu&lt;/a&gt; might be a workable solution.&lt;/p&gt;
&lt;p&gt;Lulu takes PDFs so step one was to convert the SICP website to one big PDF.&lt;/p&gt;
&lt;p&gt;First, I used wget to mirror the site. Now that I had all the files, I wanted to clean them up a little. Every single page had the previous and next links at the bottom, which are obviously not needed when the pages are in physical form. I ran the following sed command to remove these lines:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sed -i &amp;quot;/\[Go to/d&amp;quot; *.html&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The next step was to convert the HTML to PDF. I used htmldoc for this particular task.  First, I put all the names of the HTML files in one text file, on one line, and in the correct order. I called this file &amp;ldquo;all_files.txt&amp;rdquo;. The htmldoc command I used to convert to PDF is the following:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;htmldoc -f sicp.pdf --webpage --left .75in --right .75in $(cat all_files.txt)&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I then uploaded this file to Lulu and designed my (very) simple cover. I made it clear in the back cover text that I was printing this book under the rights granted by the aforementioned license, that I would receive no profit from it, and included a link back to the original source. I&amp;rsquo;m not a lawyer, so I hope that covers all the bases.&lt;/p&gt;
&lt;p&gt;Lulu has a convenient feature that lets you do a private printing. I could probably make this book public with my profit set to zero, and even though that would be covered under the license, it still feels strange to do.&lt;/p&gt;
&lt;p&gt;I am very curious how this book will turn out. The Lulu process was actually fun, and if the book turns out well, I could see myself using the service again. Once I get it, I&amp;rsquo;ll post a review of the service and possibly some pictures of the final product. Nonetheless, I&amp;rsquo;m excited to get a print version of this book at a much-reduced price.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Edited to add:&lt;/strong&gt; Here&amp;rsquo;s the download for the PDF: &lt;a href=&#34;https://nolancaudill.com/files/sicp.pdf&#34;&gt;sicp.pdf&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Keeping the music going</title>
      <link>https://nolancaudill.com/2009/02/22/keeping-the-music-going/</link>
      <pubDate>Sun, 22 Feb 2009 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2009/02/22/keeping-the-music-going/</guid>
      <description>&lt;p&gt;I love listening to last.fm&amp;rsquo;s similar artist radio stations. I constantly forget that the music I am listening to is coming from one of my Firefox tabs, and will accidentally close it from time to time, stopping whatever music is playing.&lt;/p&gt;
&lt;p&gt;I decided to fix this for myself with a ridiculously short Greasemonkey script that uses the window object&amp;rsquo;s &amp;ldquo;onbeforeunload&amp;rdquo; handler to make sure I actually meant to close the tab/window.&lt;/p&gt;
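&lt;p&gt;The whole trick, as a rough sketch (the prompt text here is illustrative, not necessarily the script&amp;rsquo;s exact wording):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
// Returning a string from the onbeforeunload handler makes the
// browser show an &amp;#34;Are you sure?&amp;#34; confirmation before the page unloads.
window.onbeforeunload = function () {
    return &amp;#34;The last.fm radio is still playing. Close anyway?&amp;#34;;
};
&lt;/code&gt;&lt;/pre&gt;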
&lt;p&gt;&lt;a href=&#34;http://nolancaudill.com/wp-content/uploads/2009/02/lastfm.user.js&#34;&gt;Here&lt;/a&gt; it is for consumption. This is obviously not a work of genius or of any considerable effort, just a fix for my own absentmindedness.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Updated:&lt;/strong&gt; I added a little more functionality to the script. There is now a button in the top right corner that lets you toggle the close prompt, so you can browse last.fm without getting the &amp;lsquo;Are you sure?&amp;rsquo; prompt on every page load.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Something new, Something Clojure</title>
      <link>https://nolancaudill.com/2008/12/30/something-new-something-clojure/</link>
      <pubDate>Tue, 30 Dec 2008 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2008/12/30/something-new-something-clojure/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been shopping for a new programming language to learn the past few months and I&amp;rsquo;ve decided to jump on the functional wagon and expand my mind a little.&lt;/p&gt;
&lt;p&gt;For my day job, I&amp;rsquo;m a PHP and Javascript developer, enjoying the latter more than the former, and for personal scripts and side projects, I&amp;rsquo;ve been a Python guy for years.&lt;/p&gt;
&lt;p&gt;I wanted something that was a departure from these more imperative languages, which naturally pointed me toward the functional languages. At the same time, I wanted something that wasn&amp;rsquo;t purely academic, something I could build real-world software with that other people could easily use.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been fascinated with Lisp and its S-expressions and the idea of &lt;a href=&#34;http://en.wikipedia.org/wiki/Homoiconicity&#34;&gt;code being data&lt;/a&gt; and vice versa. At the same time the speedy, static-typed languages like OCaml and Haskell were very intriguing.&lt;/p&gt;
&lt;p&gt;After playing with OCaml for a week or two, working through the first few chapters of &lt;a href=&#34;http://book.realworldhaskell.org/&#34; title=&#34;Real World Haskell&#34;&gt;Real World Haskell&lt;/a&gt;, and watching talks by the entertaining Simon Peyton-Jones, I enjoyed them, but I wasn&amp;rsquo;t infatuated with them. No magic spark, I guess you could say. I think it comes down to me being more inclined toward dynamically-typed languages, which seem to cut down one more barrier between me and the implementation of my code.&lt;/p&gt;
&lt;p&gt;So, ruling out the OCaml and Haskell brand of typed, functional languages, I found myself working my way back to Lisp. Surveying the Lisp scene, I was confronted with quite a few choices of implementation. I decided to go with Clojure: from my readings, it seems to be a nice Lisp, even according to the older Lisp guys, and you get Java&amp;rsquo;s almost endless supply of libraries, as well as the speedy JVM that runs on all the major platforms. I&amp;rsquo;m sold.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve played with &lt;a href=&#34;http://www.sbcl.org/&#34;&gt;SBCL&lt;/a&gt;, going through bits and pieces of &lt;a href=&#34;http://gigamonkeys.com/book/&#34; title=&#34;Practical Common Lisp&#34;&gt;Practical Common Lisp&lt;/a&gt;, and I really enjoy the expressiveness and the speed of going from the idea of what you want to do to actually seeing it run. Stuart Holloway (of nearby Chapel Hill) has &lt;a href=&#34;http://blog.thinkrelevance.com/2008/9/16/pcl-clojure&#34;&gt;written&lt;/a&gt; a series of blog posts porting pieces of the PCL code to Clojure, which should serve as a nice introduction, as should the &lt;a href=&#34;http://clojure.blip.tv/&#34;&gt;screencasts&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ll post updates on new discoveries I make, and any software I decide to work on.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Setting up lighttpd/Apache for Django on Slicehost</title>
      <link>https://nolancaudill.com/2008/02/10/setting-up-lighttpdapache-for-django-on-slicehost/</link>
      <pubDate>Sun, 10 Feb 2008 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2008/02/10/setting-up-lighttpdapache-for-django-on-slicehost/</guid>
      <description>&lt;p&gt;I finally made the jump and moved the websites I was hosting on my home PC and moved them out to &lt;a href=&#34;http://slicehost.com&#34;&gt;slicehost&lt;/a&gt;. Signing up for my slice could not have been easier and so far it has been a flawless experience. The PickledOnion articles were nice to double-check myself to make sure I didn&amp;rsquo;t miss anything.&lt;/p&gt;
&lt;p&gt;I opted for the Ubuntu 7.10 256 MB slice since that was what the sites were previously hosted on. I decided that I wanted to do things by &lt;a href=&#34;http://djangobook.com&#34;&gt;the book&lt;/a&gt;, so I set up a lighttpd server that served my media straight up and funneled all Django requests through Apache/mod_python. I couldn&amp;rsquo;t find an exact guide to this published anywhere, so I thought I would add a few code snippets to help others looking to do the same.&lt;/p&gt;
&lt;p&gt;First off, I got the Apache/mod_python setup working. This was pretty much a cut and paste from the deployment docs found in the Django book. Nothing tricky there. By default, Apache runs on port 80, but that will be changed later on.&lt;/p&gt;
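&lt;p&gt;For reference, the mod_python half is essentially the stock configuration from the Django deployment docs (the settings module name below is a placeholder for your own project):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
&amp;lt;Location &amp;#34;/&amp;#34;&amp;gt;
    SetHandler python-program
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonDebug Off
&amp;lt;/Location&amp;gt;
&lt;/code&gt;&lt;/pre&gt;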
&lt;p&gt;The next step was to get lighttpd working properly. The relevant snippet is below:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
$HTTP[&amp;#34;host&amp;#34;] =~ &amp;#34;^(www\.)?example\.com&amp;#34; {
    $HTTP[&amp;#34;url&amp;#34;] !~ &amp;#34;^/(public|media)/&amp;#34; {
        proxy.server = ( &amp;#34;&amp;#34; =&amp;gt;
            ( (
                &amp;#34;host&amp;#34; =&amp;gt; &amp;#34;127.0.0.1&amp;#34;,
                &amp;#34;port&amp;#34; =&amp;gt; 81
            ) )
        )
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Broken down, this says: if we get a request for example.com and the URL does not start with one of our media directories (read: a Django request), proxy it to port 81. This way lighttpd serves up all the static files and redirects the Django stuff to our Apache instance listening on port 81.&lt;/p&gt;
&lt;p&gt;After that, change the ports.conf file for your Apache instance to listen on port 81 and the &amp;ldquo;server.port&amp;rdquo; variable in your lighttpd.conf to listen on port 80, then restart both servers. If everything went correctly, you should now have two servers doing what they do best and a happy machine to boot.&lt;/p&gt;
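&lt;p&gt;Concretely, the two port changes amount to this (file locations will vary by distribution):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
# /etc/apache2/ports.conf -- Apache hides behind the proxy
Listen 81

# /etc/lighttpd/lighttpd.conf -- lighttpd faces the world
server.port = 80
&lt;/code&gt;&lt;/pre&gt;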
&lt;p&gt;I did add a couple of lines to my Apache conf to get better performance.&lt;/p&gt;
&lt;p&gt;First off, I turned off KeepAlive, as suggested by &lt;a href=&#34;http://www.jacobian.org/writing/2005/dec/12/django-performance-tips/&#34;&gt;Jacob Kaplan-Moss&lt;/a&gt;. KeepAlive is useful if you are using Apache to serve several files over one TCP connection, like multiple images on a page load. Since every page load makes just one request to Apache (for the HTML itself) and lighttpd handles all of the static serving (which it is very good at), KeepAlive doesn&amp;rsquo;t help us at all and actually hurts us, as Apache quickly becomes RAM-hungry holding on to your full Django code when there is no need for it.&lt;/p&gt;
&lt;p&gt;The second tuning measure was MaxRequestsPerChild. I set mine to 500 and saw a big drop in RAM usage. Since these are fairly RAM-heavy Apache processes, capping the number of requests each child serves ensures they get cleaned up regularly.&lt;/p&gt;
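&lt;p&gt;Both tweaks are one-liners in the Apache config:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
# lighttpd handles the static files, so keeping connections open
# between requests just pins RAM-heavy Apache children for nothing.
KeepAlive Off

# Recycle each child process after it has served 500 requests.
MaxRequestsPerChild 500
&lt;/code&gt;&lt;/pre&gt;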
&lt;p&gt;Overall, I have been very pleased with slicehost and the Django book was very helpful in getting everything up and running.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>jQuery: a new joy</title>
      <link>https://nolancaudill.com/2007/11/07/jquery-a-new-joy/</link>
      <pubDate>Wed, 07 Nov 2007 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2007/11/07/jquery-a-new-joy/</guid>
      <description>&lt;p&gt;At work, I have been working on somewhat complex site with a lot of Javascript going on in the form of some ajax calls and Google Maps. We noticed that the site had an almost painful load time, approximately 8 seconds for a fresh pull of the home page. This was too much.&lt;/p&gt;
&lt;p&gt;The first thing I did was profile it using Firebug&amp;rsquo;s Net tab. Of all the tools in my web developer&amp;rsquo;s tool chest, &lt;a href=&#34;https://addons.mozilla.org/en-US/firefox/addon/1843&#34;&gt;Firebug&lt;/a&gt; is far-and-away my favorite, most-used, and most-recommended app. Not only does it give you a super nice DOM browser and a great Javascript debugger, but it also has a nice download profiler under its Net tab. It shows every single element the browser requests for your page, with its HTTP headers and, even more importantly, the time to complete each transaction.&lt;/p&gt;
&lt;p&gt;The profiling revealed a few things instantly. First, there were a lot of images, and some were rather hefty. Since I was building the site from somebody else&amp;rsquo;s design files, and design is definitely not my domain, there was not much I could do about that.&lt;/p&gt;
&lt;p&gt;The second thing was the Google Maps calls. I put the Google request calls at the end so the rest of the page could load before them. Doing just this took about a second and a half off a fresh homepage load. It was a marked improvement, but still nothing spectacular.&lt;/p&gt;
&lt;p&gt;The final thing brings me to the topic of this post. Our standard JS library at work is &lt;a href=&#34;http://dojotoolkit.org&#34;&gt;Dojo&lt;/a&gt;. Dojo is great when you are trying to create an RIA interface with all the bells and whistles, such as tabs, number spinners, rich text editors, grids, and trees. Its widget library, dijit, is quite impressive. An early complaint about Dojo was that it was a big download; if I remember correctly, something like 100KB. The Dojo team fixed this in 0.9 with a more modular dependency download structure. Say you needed only widget A. You&amp;rsquo;d make the Dojo call in your page for the widget, and Dojo would then download only the bits of JS it needed to get that widget on the page. The problem is that if you are only using a small subset of the library, you pay an exorbitant overhead, not in download weight, but in HTTP requests.&lt;/p&gt;
&lt;p&gt;For example, I was only using Dojo to do some absolute-positioning work for help tip bubbles and a few simple Ajax calls. This was like going squirrel hunting with a Scud missile. Even using just core Dojo (no widgets), I made about a dozen HTTP requests just to get the pieces of JS I needed. In total, it took a little over 2 seconds just to load the JS library, even though it only amounted to about 60KB of total file size.&lt;/p&gt;
&lt;p&gt;These small tasks almost bordered on something that I could do myself, but since I can be a &lt;a href=&#34;http://undefined.com/ia/2006/10/24/the-fourteen-types-of-programmers-type-4-lazy-ones/&#34;&gt;lazy programmer&lt;/a&gt;, I figured a small, lightweight JS library written and tested by people much smarter than myself would be the ticket. I noticed that &lt;a href=&#34;http://jquery.com&#34;&gt;jQuery&lt;/a&gt; was being mentioned more and more, in a good way, around the web, so I figured this would be the perfect project to try it out. I can definitely say &amp;lsquo;mission accomplished&amp;rsquo; with jQuery.&lt;/p&gt;
&lt;p&gt;I used the non-gzipped packed version, which weighed in at 47KB. This took around 50ms to download and, even better, it all came in one HTTP request, with no just-in-time loading. The Ajax APIs of Dojo and jQuery were similar enough that it took very little modification to port, and I even trimmed down some of my Javascript using jQuery&amp;rsquo;s awesome syntax. By simply switching to jQuery, I shaved a little over 2 seconds off the page load time with no loss of functionality. Nice.&lt;/p&gt;
&lt;p&gt;After all the profiling and modifying, I managed to cut the page load from a little over 8 seconds to a little over 4 seconds.&lt;/p&gt;
&lt;p&gt;As a side benefit, it was actually &lt;em&gt;fun&lt;/em&gt; to write Javascript. With jQuery&amp;rsquo;s well-documented API and function list (something Dojo definitely lacked, last I checked), I spent more time Getting Things Done than tinkering and experimenting. I found myself improving the site&amp;rsquo;s visual effects simply because jQuery made it so easy to do. It was almost completely painless.&lt;/p&gt;
&lt;p&gt;This is not a slam on Dojo at all, because if you need some advanced UI features, I&amp;rsquo;d definitely go down the path towards Dojo. This was simply using the best tool for the job.&lt;/p&gt;
&lt;p&gt;I am now looking for more excuses to use the tiny, tidy little jQuery library in other spots. jQuery brought joy to Javascript development, something I wouldn&amp;rsquo;t really have thought possible.&lt;/p&gt;
</description>
    </item>
  </channel>
</rss>
