<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts tagged "web-development" - nolan caudill&#39;s internet house</title>
    <link>https://nolancaudill.com/tags/web-development/</link>
    <description>Posts tagged "web-development" on nolan caudill&#39;s internet house</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 20 Apr 2012 07:00:00 +0000</lastBuildDate>
    <atom:link href="https://nolancaudill.com/tags/web-development/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Moderation Amplification</title>
      <link>https://nolancaudill.com/2012/04/20/moderation-amplification/</link>
      <pubDate>Fri, 20 Apr 2012 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2012/04/20/moderation-amplification/</guid>
      <description>&lt;p&gt;A few nights ago, I came across &lt;a href=&#34;http://twitter.com/tomcoates&#34;&gt;Tom Coates&amp;rsquo;&lt;/a&gt; post titled &lt;a href=&#34;http://www.plasticbag.org/archives/2007/01/social_whitelisting_w/&#34;&gt;Social whitelisting with OpenID&amp;hellip;&lt;/a&gt; about how to handle moderating an online forum when the amount that needs to be moderated outweighs any one person or small group of administrators&amp;rsquo; capabilities (due to time, sanity, etc). This post was written in early 2007 but this system which advocates building a web of trust between your friends was never built as far as I know.&lt;/p&gt;
&lt;p&gt;It did remind me, though, of a moderation tool that a couple of other tech folks and I designed and built right around this same time for a TV station&amp;rsquo;s website overhaul.&lt;/p&gt;
&lt;p&gt;One golden rule of the web is &amp;ldquo;don&amp;rsquo;t read the comments&amp;rdquo; because, almost without exception, they bring out the worst in the worst people. On a news site, this is triply the case. In polite company, religion, politics, and money are things to tread carefully around even with close friends, but the veil of anonymity that the Internet provides, combined with a website that deals daily in stories of religion, politics, and money that often directly impact the reader, is a damned near-perfect recipe for explosive, hateful, and irrational comments.&lt;/p&gt;
&lt;p&gt;As part of this TV site&amp;rsquo;s redesign, there was to be more of a focus on contributions from the viewers, which came in the form of photo and video submissions, guest blogging, a simple &amp;ldquo;wall&amp;rdquo; for members to leave comments for one another, and, of course, comments on news articles.&lt;/p&gt;
&lt;p&gt;As a hard rule, beside every piece of member-added content, we always put a link where anyone could report abusive content. This would go into a moderation queue in our administrative tools and our editors could act on it, either marking it as abusive or marking it as &amp;ldquo;seen but okay&amp;rdquo;. For a news site with millions of visitors a day, this system wasn&amp;rsquo;t manageable, as there would be as many false positives as there were valid abuse reports. For some, it was abusive if someone disagreed with them and they couldn&amp;rsquo;t find a way to logically defend their argument. This was overwhelming for a tiny editorial staff.&lt;/p&gt;
&lt;p&gt;So, we had to devise something to at least bubble up the true offenders in the moderation queue. (Now, this was 5-6 long years ago and I&amp;rsquo;m sure it&amp;rsquo;s evolved since I left, but here&amp;rsquo;s how I vaguely remember it working.) The idea that we came up with was this: a member reporting abuse is &amp;ldquo;right&amp;rdquo; if we agree with their judgment, and people whose reports are correct more often earn a certain amount of our trust in their judgment. If someone reported abuse 100 times and 100 times we agreed with them, there&amp;rsquo;s a really good chance that their next report will be correct as well.&lt;/p&gt;
&lt;p&gt;So we assigned every user a starting trust score, and every time they reported abuse that we deemed valid, we&amp;rsquo;d bump up their trust score. On every abuse report, we&amp;rsquo;d look at the trust score of the person that reported it and, if it met some threshold, we&amp;rsquo;d silently remove the content from the site. The abuse report would still exist in the system, but there was less time pressure to go through the abuse queue, as after a while a small army of reliable reporters would be moderating the site for us.&lt;/p&gt;
&lt;p&gt;On the flip side, if an abuse report was deemed wrong, the reporter&amp;rsquo;s score would be drastically reduced, halved if I remember correctly. We were fine with the penalty being so severe, as good users would build themselves back up, and introducing a little chaos into the system was nice: different editors had slightly different standards, and a lot of the time deciding whether something was abuse was a judgment call. Chaos was inherent in the system from the start.&lt;/p&gt;
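&lt;p&gt;The bookkeeping behind this scheme is simple enough to sketch in a few lines of Python. To be clear, this is my own illustration: the names, the starting score, and the threshold value are invented, not the values the real system used.&lt;/p&gt;

```python
# Hypothetical sketch of the trust-score moderation scheme; all names
# and numbers are invented for illustration.
AUTO_HIDE_THRESHOLD = 10  # assumed value, not the real one

class Reporter:
    def __init__(self):
        self.score = 1  # every member starts with a small amount of trust

    def valid_report(self):
        # an editor agreed with this reporter: bump their trust up
        self.score += 1

    def invalid_report(self):
        # an editor disagreed: penalize hard (halved, per the post)
        self.score //= 2

def handle_abuse_report(reporter, content):
    # a report from a sufficiently trusted member silently hides the
    # content; the report itself still lands in the queue for review
    if reporter.score >= AUTO_HIDE_THRESHOLD:
        content.hidden = True
```

&lt;p&gt;The halving penalty means one bad call costs far more than one good call earns, which matches the asymmetry described above.&lt;/p&gt;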
&lt;p&gt;These scores were obviously secret, shown only to the editors, and I honestly can&amp;rsquo;t remember if we were actually doing the silent removals by the time I left, but I do think those reports at least got priority in the moderation queue and when going through thousands of reports, this was incredibly helpful.&lt;/p&gt;
&lt;p&gt;I like to view this kind of system as sort of an ad-hoc Bayesian filter where your moderation efforts are amplified, rewarding and ultimately giving some weight to people that moderate like you do.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;So, the social whitelist begins with allowing a subset of users to post, while the trust score model involves dynamically building a list of good moderators that agree with you on what is abusive content.&lt;/p&gt;
&lt;p&gt;I still love the idea of social whitelisting, or of building up a set of trusted users to help you with moderating, as both are more organic approaches that force you, as someone in charge of a community, to actually make decisions about what kind of discourse you want on your site.&lt;/p&gt;
&lt;p&gt;This is also why it saddens me a bit today as more and more blogs drop in web-wide, generic commenting systems, like Facebook&amp;rsquo;s. While this lets almost everyone quickly log in and start adding comments, it&amp;rsquo;s horrible for site owners that are trying to build an intimate community. Every decent community probably has a baseline standard of what&amp;rsquo;s acceptable: no hate speech, no physical threats, no illegal content, etc. This is what Facebook provides &amp;ndash; a baseline &amp;ndash; and nothing more.&lt;/p&gt;
&lt;p&gt;Any community worth moderating is nuanced, has a voice and a direction. Facebook doesn&amp;rsquo;t offer this, so every blogger that drops in this commenting system is making that trade-off between ease of user engagement and being able to effectively manage a community. I&amp;rsquo;d like to see more sites go back to these more hands-on and thinking-hard approaches to how to moderate and direct their communities instead of relying on someone else&amp;rsquo;s standards of what constitutes a good contribution.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Thoughts on Pagination</title>
      <link>https://nolancaudill.com/2012/03/24/pagination/</link>
      <pubDate>Sat, 24 Mar 2012 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2012/03/24/pagination/</guid>
      <description>&lt;p&gt;A common navigation pattern on websites is what I call &amp;ldquo;chunked pagination&amp;rdquo;: each page has a predetermined number of pieces of content on them. Page 1 shows items 1-10, page 2 shows 11-20, and so on to the end of the stream. Easy.
This pattern, though incredibly common, isn&amp;rsquo;t useful for navigation most of the time. The major issue is that it gives few hints about where those links will drop you in the stream. It&amp;rsquo;s an arbitrary chunking of how you actually display the content.
Pagination should provide accurate navigation points that reflect the overall ordering of the stream, and pagination based around fixed-length pages provides nothing more than arbitrary access into this ordering, where we have to use estimation and instinct about the distribution of the content in order to guess where a link will send us.
Having a pagination scheme that closely models how a stream is sorted can give you both the casual browsing experience that the numbered pagination provides, as well as powerful navigation abilities that the numbered pagination &lt;em&gt;can&amp;rsquo;t&lt;/em&gt; provide.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;For example, take your average photo site that displays content in reverse chronological order: that is, newest to oldest. Let&amp;rsquo;s say your friend has posted 2000 photos to this site. The site shows the viewer 10 things per page. With our prolific user, this gives us 200 pages.
Going to the middle of this content takes us to page 100. What does this mean, beyond we&amp;rsquo;re at the middle? Not much.
Let&amp;rsquo;s say this user posted their first photo to the site years ago, but has just gotten back from a month-long trek through Europe where they took a thousand pictures. So our page numbers monotonically march back 10 by 10, but since this stream is sorted by date and we want to go back in time on their photostream to a dinner we shared 6 months ago, we&amp;rsquo;ll just have to guess which page to start with.
Since we know our friend&amp;rsquo;s usual posting velocity, we think that ten pages should take us a few months back in time, so we go to page 10. On page 10, our friend is in Europe, looking at the River Seine, just 1 week ago. Let&amp;rsquo;s go back 10 more pages. Hm, our friend is still in Europe, admiring the beach at the French Riviera. This is frustrating, so let&amp;rsquo;s try 40 more pages. Click. Damn, our friend is still in Europe (good for them, but bad for our navigation).
After some clicking, we&amp;rsquo;ve got them figured out. We know that page 100 is right before their trip started, so our estimation applies again, disregarding the first 100 pages, we quickly find the dinner pictures.
But, now that our friend is back from their trip, they resume their normal posting volume of 2 or 3 things per week. So, a month or two after our friend is back, the first two pages cover a few weeks, while the next 100 pages cover the four weeks of the trip. The concept of &amp;ldquo;page 100&amp;rdquo; no longer means anything, as this link is very much a moving target while things keep getting posted to the beginning of the stream.
Lots of problems with the page numbers, it seems. First, we have to guess at our friend&amp;rsquo;s posting volume and frequency to even make a stab in the dark of how a particular page number relates to a point in time. Then, in an ideal world where URLs are meaningful, the page numbered link (to use Flickr as an example, &lt;a href=&#34;http://www.flickr.com/photos/nolancaudill/page7/&#34;&gt;http://www.flickr.com/photos/nolancaudill/page7/&lt;/a&gt; is the seventh page of my photostream) doesn&amp;rsquo;t point to any specific resource, beyond that it points roughly to the 126th through the 142nd photos I&amp;rsquo;ve posted and will point to different photos on my next upload. This link is only &amp;ldquo;valid&amp;rdquo; as long as the dataset doesn&amp;rsquo;t change (which datasets tend to do).&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Let&amp;rsquo;s look at the two main ways that people navigate a sorted list.
The first, as we discussed, is by seeking. You know where something happened in a particular sorting (by date, in this example) and you want to get to it. Page numbers do let us narrow it down, but it&amp;rsquo;s usually a guessing game. Go too far back and you have to click back. Not far enough? Click further. Repeat these steps until you narrow down onto the correct page.
The second way people navigate is by browsing, where there&amp;rsquo;s no specific goal in mind beyond seeing some stuff. For this method, the page numbers are also not necessary. You either want to view the next page, or just jump to some point further in the list. For the former, a simple &amp;ldquo;next&amp;rdquo; link is adequate, and for the latter, you can provide this same action in a way that makes sense for this case but also for the &amp;ldquo;seekers&amp;rdquo;.
The way to represent this that satisfies both the seekers and the browsers is to have pagination that actually chunks based around how the list is sorted.
Some examples of this in real-life: dictionaries with the tabs on the page edges that show the alphabet, calendars (one page per month), and encyclopedia sets that have one (or more) books per letter. Dictionaries are actually a good representation of the problem of uneven distribution of the content, as the &amp;lsquo;T&amp;rsquo; and &amp;lsquo;I&amp;rsquo; sections are much thicker than the &amp;lsquo;X&amp;rsquo; and &amp;lsquo;Z&amp;rsquo; sections. Jumping halfway into a dictionary doesn&amp;rsquo;t mean much at all, but having those handy tabs on the edges give you a good head start. Also, even if lots of words are added or removed, jumping to the &amp;lsquo;J&amp;rsquo; tab will always take you to the first &amp;lsquo;J&amp;rsquo; word, regardless of changes in the vocabulary.
On the web, blogs usually have both the seeking and browsing navigation controls. WordPress, for example, has the &amp;lsquo;next&amp;rsquo; and &amp;lsquo;previous&amp;rsquo; links at the bottom of the main list views, but usually provides a separate archive page that lists all the posts split out by month. Aaron Cope&amp;rsquo;s &lt;a href=&#34;https://github.com/mncaudill/parallel-flickr&#34;&gt;parallel-flickr&lt;/a&gt; has an interface that shows all photos uploaded by your friends in the last day. Instead of using pages, the list is divided up by uploader (signified by their avatars), which is helpful as I have some friends that post one photo at a time, and others that empty their entire memory card in one fell swoop, but I can successfully navigate both cases.
In all these cases, form and function work together nicely, with the pagination links reflecting how the underlying data is actually laid out, making both seekers and browsers happy. It also creates useful links: a link to &amp;ldquo;January 2010&amp;rdquo; in a reverse-date-ordered photostream will always be constant, regardless of how the data around it changes.
Since it &lt;a href=&#34;https://twitter.com/#!/nolancaudill/status/182952567880429568&#34;&gt;came up on the Twitters&lt;/a&gt;, I should mention the concept of infinite scrolling as a pagination scheme. (Since &lt;a href=&#34;http://twitter.com/blech&#34;&gt;blech&lt;/a&gt; has a protected Twitter account, I don&amp;rsquo;t want to write out his tweet verbatim, but I can summarize it for sake of context by saying he took issue with an instance of infinite scrolling, which can be easily deduced from my reply.) Infinite scrolling is basically a pretty representation of the &amp;lsquo;next&amp;rsquo; link that you &amp;lsquo;click&amp;rsquo; by scrolling to the bottom of a page. I&amp;rsquo;ll leave whether or not it&amp;rsquo;s good user experience to others, but as a purely visual experience, I like it. If it&amp;rsquo;s the only form of pagination, though, that&amp;rsquo;s a problem, and another navigation scheme should be provided if it&amp;rsquo;s important that your users can scan the list or find something specific.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;So to wrap this up, how would one create pagination links for our reverse-date order example, which is an incredibly common view? The obvious way is by actually chunking around dates. I believe people that are much better at designing useful things than myself could adapt this into the same form that our current paginate-by-arbitrary-chunk format occupies. I think the links you usually see in an archive view could be adapted into a succinct form that makes both seeking and browsing easy operations.&lt;/p&gt;
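&lt;p&gt;As a rough illustration of what chunking around dates might look like, here is a short Python sketch that groups a reverse-chronological stream into per-month buckets, each of which could back one stable pagination link. The data and field names are invented.&lt;/p&gt;

```python
# Sketch of date-chunked pagination: group a newest-first stream by
# month instead of slicing it into fixed-size pages. Illustrative only.
from collections import OrderedDict
from datetime import date

def month_buckets(photos):
    """photos is a list of (posted_on, photo_id) pairs, newest first.
    Returns an ordered mapping of 'YYYY-MM' keys to photo ids."""
    buckets = OrderedDict()
    for posted_on, photo_id in photos:
        buckets.setdefault(posted_on.strftime('%Y-%m'), []).append(photo_id)
    return buckets

# invented sample stream, newest first
photos = [
    (date(2012, 3, 20), 'river-seine'),
    (date(2012, 3, 2), 'french-riviera'),
    (date(2011, 9, 14), 'dinner'),
]
```

&lt;p&gt;Each bucket maps naturally to a link like /photos/2011-09/ that keeps pointing at the same photos no matter how much is posted later, which is exactly the stability the page-numbered links lack.&lt;/p&gt;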
</description>
    </item>
    <item>
      <title>IE6 will never go away (it seems)</title>
      <link>https://nolancaudill.com/2009/07/02/ie6-will-never-go-away-it-seems/</link>
      <pubDate>Thu, 02 Jul 2009 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2009/07/02/ie6-will-never-go-away-it-seems/</guid>
      <description>&lt;p&gt;The Yahoo! User Interface team &lt;a href=&#34;http://yuiblog.com/blog/2009/07/02/gbs-update-20090702/&#34;&gt;updated&lt;/a&gt; their &lt;a href=&#34;http://developer.yahoo.com/yui/articles/gbs/&#34;&gt;Graded Browser Support&lt;/a&gt; and IE6 is still alive as an A-grade browser, and that doesn&amp;rsquo;t appear to be changing anytime soon.&lt;/p&gt;
&lt;p&gt;The Graded Browser Support, or GBS, is the categorization of browsers into different levels that Yahoo! has agreed to support. An A-grade ranking means that the browser has a large enough user base, and that those browsers will receive Yahoo!&amp;rsquo;s highest level of QA and testing support, while the other levels receive little to none. This doesn&amp;rsquo;t mean that Yahoo! sites (or sites that use the &lt;a href=&#34;http://developer.yahoo.com/yui/&#34;&gt;YUI tools&lt;/a&gt;) will be broken in non-A-grade browsers, it&amp;rsquo;s just that they won&amp;rsquo;t be QAed as thoroughly. Usually these fringe browsers like Chrome and Opera are &lt;em&gt;more&lt;/em&gt; compliant with standards, and work automatically with valid HTML, CSS, and JavaScript. They don&amp;rsquo;t get the A-grade yet, mainly due to their small user base.&lt;/p&gt;
&lt;p&gt;IE6 is still an A-grade browser, while there are plans to move browsers like Firefox 3.0 to X-grade status by the end of the year. This is amazing. For comparison, Firefox 3.0 came out about a year ago (June 17, 2008), while IE6 came out in August &lt;em&gt;&lt;strong&gt;2001&lt;/strong&gt;&lt;/em&gt;. When IE6 was released, the WTC towers were still standing, no one had seen any of the Lord of the Rings or Harry Potter movies, George Harrison was still alive, and it remained to be seen what kind of president George W. Bush was going to be. If IE6 were a human, it would be starting the 3rd grade soon. To put it in web terms, this was 4 years before the first video was uploaded to YouTube, 2.5 years before anyone Facebooked anyone, and even 3 years before Google went public. The browser&amp;rsquo;s been out awhile.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://nolancaudill.com/images/2001_A_Space_odyssey.gif&#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/2001_A_Space_odyssey.gif&#34; alt=&#34;2001: An Ancient Web Browser&#34; title=&#34;2001_A_Space_odyssey&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The fact that Yahoo! has still shown a commitment to ensure that IE6 users get the best experience that Yahoo! can give them means two things — there are still a lot of IE6 users and the company still makes money off of them. &lt;a href=&#34;http://news.cnet.com/Firefox-users-ignore-online-ads,-report-says/2100-1024_3-5479800.html&#34;&gt;This study&lt;/a&gt; done 5 years ago (though still 3 years after IE6 came out) reported that IE users were 4 times more likely to click an online ad than Firefox users. Since Yahoo! is still courting IE6 users, this must mean that those users are &lt;em&gt;still&lt;/em&gt; clicking those ads.&lt;/p&gt;
&lt;p&gt;At some point, Yahoo! and the web as a whole will have to drop their A-grade support for IE6. HTML5 and CSS3 are now ready to be consumed with Firefox 3.5, Safari 4, and Chrome 2.0 all available for download. This doesn&amp;rsquo;t mean that developers and designers have to completely break the web for IE6 users, but maybe these users don&amp;rsquo;t get the full experience that the new browsers can deliver. We can do what we can for the IE6 users, but also allow the newer browsers to take advantage of the new, exciting things.&lt;/p&gt;
&lt;p&gt;I like the analogy that YUI has on their browser support page comparing the different browsers to televisions. You have TVs that handle the incoming signal differently, starting with hand-cranked survival radios that only pick up the audio, to black-and-white TVs, all the way to the 1080p Hi-Defs with millions of colors and surround sound. The broadcast stations send out the signal and your TV handles it as best as it can. No one is surprised when a black-and-white TV doesn&amp;rsquo;t have color and IE6 users shouldn&amp;rsquo;t be surprised that their favorite website doesn&amp;rsquo;t look the same as when seen in Firefox. As stated on the GBS page, &amp;ldquo;[s]upport does not mean that everybody gets the same thing.&amp;rdquo; This is a hard fact to digest for companies with a web presence and designers that believe in one true version of their art.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://nolancaudill.com/images/Television.jpg&#34;&gt;&lt;img src=&#34;https://nolancaudill.com/images/Television.jpg&#34; alt=&#34;Hi-Def, 60&amp;rsquo;s style.&#34; title=&#34;Television&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;So, why don&amp;rsquo;t IE6 users upgrade their browsers? Simple — they don&amp;rsquo;t know what a browser is. &lt;a href=&#34;http://www.youtube.com/watch?v=o4MwTvtyrUQ&#34;&gt;This video&lt;/a&gt; asks random people the simple question &amp;ldquo;what is a browser?&amp;rdquo;. The people they ask get the answer (sometimes hilariously) wrong, but these people are not stupid. It&amp;rsquo;s just that what browser they use is not important to them. All they care about are the sites they can access with it. And you know what, &lt;em&gt;they&amp;rsquo;re 100% right&lt;/em&gt;. It &lt;em&gt;is&lt;/em&gt; just about the web. The reason that web developers and designers are so passionate about browsers is that we are trying our damnedest to make the most forward-thinking, interactive, immersive sites for these people, but we can&amp;rsquo;t. IE6 &amp;ndash; and its large market share &amp;ndash; won&amp;rsquo;t let us.&lt;/p&gt;
&lt;p&gt;Good news though is that I think this will change soon and IE6 will be nudged to the door by its own creator, Microsoft. The reason that people still use IE6 is that it was already installed on their Windows machines, so the browser changes when Microsoft says it does. Microsoft will soon see what Google saw when Google decided to build Chrome. Google owns the web, and Microsoft owns the browser. Google wanted a piece of that browser pie, as it meant that more people would be enjoying the web in its full capacity, so they created a browser. Microsoft saw this plan and realized that if they wanted a slice of the web market they were going to need to put a real contender into the search market (sorry, MSN), and thus Bing was born.&lt;/p&gt;
&lt;p&gt;If Bing takes off, and I honestly think it will, the incentive will be there for Microsoft to really encourage their users to take advantage of the newest version of IE, and finally put IE6 out to pasture.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A few small edits for clarity.&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    <item>
      <title>twitter-last-status</title>
      <link>https://nolancaudill.com/2009/04/23/twitter-last-status/</link>
      <pubDate>Thu, 23 Apr 2009 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2009/04/23/twitter-last-status/</guid>
      <description>&lt;p&gt;It was a night of firsts.&lt;/p&gt;
&lt;p&gt;I wrote my first Twitter widget. It is the &amp;lsquo;Latest Tweet&amp;rsquo; widget on the far right column of the page that uses Twitter&amp;rsquo;s public JSONP API to pull in my last Twitter update.&lt;/p&gt;
&lt;p&gt;The only thing even mildly interesting is that it has a couple of regexes that find any &amp;lsquo;@&amp;rsquo; names and link them up and (naively) hook up any hyperlinks as well. These weren&amp;rsquo;t difficult, but they always take a little bit of tinkering to get right.&lt;/p&gt;
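&lt;p&gt;For the curious, the two patterns look roughly like this. The widget itself was JavaScript; these are my own guesses at the regexes, written in Python, not the originals.&lt;/p&gt;

```python
import re

# Rough approximations of the two patterns described above (invented
# here, not copied from the original JavaScript widget).
MENTION = re.compile(r'@(\w{1,15})')   # Twitter @names
URL = re.compile(r'https?://[^\s]+')   # naive hyperlink match

def find_linkables(tweet):
    """Return the @names and bare URLs that the widget would then
    wrap in anchor tags with a substitution call."""
    return MENTION.findall(tweet), URL.findall(tweet)
```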
&lt;p&gt;More excitingly, I think, is that I decided to push it out to &lt;a href=&#34;http://github.com/mncaudill/twitter-last-status/tree/master&#34;&gt;github&lt;/a&gt; under a BSD license. This is technically my first open source software, minor as it is.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Helping jQuery Out</title>
      <link>https://nolancaudill.com/2008/10/14/helping-jquery-out/</link>
      <pubDate>Tue, 14 Oct 2008 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2008/10/14/helping-jquery-out/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve made a commitment to myself to start helping out with some of my favorite open source projects. I&amp;rsquo;ve started helping in my own little way by hanging out in the jQuery IRC channel. It&amp;rsquo;s not much effort and I&amp;rsquo;ve already helped a few people, which feels good. I&amp;rsquo;m &amp;lsquo;mncaudill&amp;rsquo; in there, so feel free to say hi.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d eventually like to start contributing code, but I&amp;rsquo;d like to get used to the community first and see how things are done. With such an intentionally lean codebase, contributing actual code might be a challenge, so helping people out in the channel might be the best way to help.&lt;/p&gt;
&lt;p&gt;Next up, I&amp;rsquo;d also like to help out with Django by doing the same. With a more extensive codebase where size is not as big of a concern as it is in jQuery, getting a patch or two in would not be as difficult, I would imagine.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Setting up lighttpd/Apache for Django on Slicehost</title>
      <link>https://nolancaudill.com/2008/02/10/setting-up-lighttpdapache-for-django-on-slicehost/</link>
      <pubDate>Sun, 10 Feb 2008 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2008/02/10/setting-up-lighttpdapache-for-django-on-slicehost/</guid>
      <description>&lt;p&gt;I finally made the jump and moved the websites I was hosting on my home PC and moved them out to &lt;a href=&#34;http://slicehost.com&#34;&gt;slicehost&lt;/a&gt;. Signing up for my slice could not have been easier and so far it has been a flawless experience. The PickledOnion articles were nice to double-check myself to make sure I didn&amp;rsquo;t miss anything.&lt;/p&gt;
&lt;p&gt;I opted for the Ubuntu 7.10 256 MB slice since that was what the sites were previously hosted on. I decided that I wanted to do things by &lt;a href=&#34;http://djangobook.com&#34;&gt;the book&lt;/a&gt; so I set up a lighttpd server that served my media straight up and funneled all Django requests through Apache/mod_python. I couldn&amp;rsquo;t find this exact setup published anywhere, so I thought I would add a few code snippets to help others looking to do the same.&lt;/p&gt;
&lt;p&gt;First off, I got the Apache/mod_python setup working. This was pretty much a cut and paste from the deployment docs found in the Django book. Nothing tricky there. By default, Apache runs on port 80, but that will be changed later on.&lt;/p&gt;
&lt;p&gt;The next step was to get lighttpd working properly. The relevant snippet is below:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
# match example.com with or without the www prefix
$HTTP[&amp;#34;host&amp;#34;] =~ &amp;#34;^(www\.)?example\.com&amp;#34; {
    # anything that is not static media is a Django request: proxy to Apache
    $HTTP[&amp;#34;url&amp;#34;] !~ &amp;#34;^/(public|media)/&amp;#34; {
        proxy.server = ( &amp;#34;&amp;#34; =&amp;gt;
            ( (
                &amp;#34;host&amp;#34; =&amp;gt; &amp;#34;127.0.0.1&amp;#34;,
                &amp;#34;port&amp;#34; =&amp;gt; 81
            ) )
        )
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Broken down, this says if we get a request for example.com and the URL does not contain one of our media directories (read: a Django request), proxy it to port 81. This way lighttpd will serve up all the static files and then redirect the Django stuff to our Apache instance that will be listening on port 81.&lt;/p&gt;
&lt;p&gt;After that, change the ports.conf file for your Apache instance to listen on port 81, and the &amp;ldquo;server.port&amp;rdquo; variable in your lighttpd.conf to listen on port 80, and then restart both servers. If everything went correctly, you should now have two servers doing what they do best and a happy machine to boot.&lt;/p&gt;
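&lt;p&gt;For reference, the two port changes amount to something like the following (on Ubuntu these files typically live at /etc/apache2/ports.conf and /etc/lighttpd/lighttpd.conf, though paths can vary by distribution):&lt;/p&gt;

```
# /etc/apache2/ports.conf: move Apache off port 80
Listen 81

# /etc/lighttpd/lighttpd.conf: lighttpd takes over port 80
server.port = 80
```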
&lt;p&gt;I did add a couple of lines to my Apache conf to get better performance.&lt;/p&gt;
&lt;p&gt;First off, I turned off KeepAlive as suggested by &lt;a href=&#34;http://www.jacobian.org/writing/2005/dec/12/django-performance-tips/&#34;&gt;Jacob Kaplan-Moss&lt;/a&gt;. KeepAlive is useful if you are using Apache to serve up several files over one TCP connection, like multiple images on a page load. Since on every page load we are making just one request to Apache (for the HTML itself) and lighttpd is handling all of the static serving (which it is very good at), KeepAlive helps us none and actually hurts us, as Apache will quickly become RAM-hungry holding on to your full Django code when there is no need for it.&lt;/p&gt;
&lt;p&gt;The second tuning measure was to set MaxRequestsPerChild. I set mine at 500 and saw a big drop in RAM usage. Since these are fairly RAM-heavy Apache processes, this way each one gets recycled once it has served a certain number of requests.&lt;/p&gt;
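&lt;p&gt;In the Apache conf, those two tuning changes look something like this (the 500 matches the value mentioned above; both are standard Apache 2.x directives):&lt;/p&gt;

```
# one request per TCP connection; lighttpd handles all the static files
KeepAlive Off

# recycle each RAM-heavy worker after it has served 500 requests
MaxRequestsPerChild 500
```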
&lt;p&gt;Overall, I have been very pleased with slicehost and the Django book was very helpful in getting everything up and running.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>jQuery: a new joy</title>
      <link>https://nolancaudill.com/2007/11/07/jquery-a-new-joy/</link>
      <pubDate>Wed, 07 Nov 2007 08:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2007/11/07/jquery-a-new-joy/</guid>
      <description>&lt;p&gt;At work, I have been working on somewhat complex site with a lot of Javascript going on in the form of some ajax calls and Google Maps. We noticed that the site had an almost painful load time, approximately 8 seconds for a fresh pull of the home page. This was too much.&lt;/p&gt;
&lt;p&gt;The first thing I did was profile it using Firebug&amp;rsquo;s Net tab. Of all the tools in my web developer&amp;rsquo;s tool chest, &lt;a href=&#34;https://addons.mozilla.org/en-US/firefox/addon/1843&#34;&gt;Firebug&lt;/a&gt; is far-and-away my favorite, my oft-used, and my most recommended app. Not only does it give you a super nice DOM browser and a great Javascript debugger, but it also has a nice download profiler under its Net tab. It gives you every single element that the browser requests for your page with its HTTP headers and, even more importantly, the time to complete each transaction.&lt;/p&gt;
&lt;p&gt;The profiling revealed a few things instantly. First, there were a lot of images and some were rather hefty. Since I was building the site from somebody else&amp;rsquo;s design files, and that is definitely not my domain, there was not much I could do about that.&lt;/p&gt;
&lt;p&gt;The second thing was the Google Maps calls. I put the Google request calls at the end so the rest of the page could load before them. Doing just this took about a second and a half off a fresh homepage load. It was a marked improvement, but still nothing spectacular.&lt;/p&gt;
&lt;p&gt;The final thing brings me to the topic of this post. Our standard JS library at work is &lt;a href=&#34;http://dojotoolkit.org&#34;&gt;Dojo&lt;/a&gt;. Dojo is great when you are trying to create an RIA interface with all the bells and whistles, such as tabs, number spinners, rich text editors, grids, and trees. Its widget library, dijit, is quite impressive. An early complaint about Dojo was that it was a big download. If I remember correctly it was something like 100kb. The Dojo team fixed this in 0.9 with a more modular dependency download structure. Say you needed to use only widget A. You&amp;rsquo;d make the Dojo call in your page for the widget and Dojo would then only download the bits of JS it needed to get that widget on the page. The problem with this is that if you are only using a small subset of the library, you had an exorbitant overhead, not in download weight, but in HTTP requests.&lt;/p&gt;
&lt;p&gt;For example, I was only using Dojo for some absolute positioning on a few help tip bubbles and a few simple Ajax calls. This was like going squirrel hunting with a Scud missile. Even using just core Dojo (no widgets), I made about a dozen HTTP requests just to fetch the pieces of JS I needed. In total, it took a little over 2 seconds just to load the JS library, even though it only amounted to about 60KB of total file size.&lt;/p&gt;
&lt;p&gt;These tasks were almost small enough that I could have written them myself, but since I can be a &lt;a href=&#34;http://undefined.com/ia/2006/10/24/the-fourteen-types-of-programmers-type-4-lazy-ones/&#34;&gt;lazy programmer&lt;/a&gt;, I figured a small, lightweight JS library that people much smarter than me have written and tested would be the ticket. I had noticed &lt;a href=&#34;http://jquery.com&#34;&gt;jQuery&lt;/a&gt; being mentioned more and more favorably around the web, so I figured this would be the perfect trial to test it out. I can definitely say &amp;lsquo;mission accomplished&amp;rsquo; with jQuery.&lt;/p&gt;
&lt;p&gt;I used the non-gzipped packed version, which weighed 47KB. It took around 50ms to download and, even better, it all came in one HTTP request, with no just-in-time loading. The Ajax APIs of Dojo and jQuery were similar enough that porting took very little modification, and I even trimmed down some of my Javascript by using jQuery&amp;rsquo;s awesome syntax. By simply switching to jQuery, I shaved a little over 2 seconds from the page load time with no loss of functionality. Nice.&lt;/p&gt;
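&lt;p&gt;To give a feel for how little changed, here is the rough shape of the port (the function names and URL are made up for illustration; the Dojo version also needed its dozen module downloads before it could run):&lt;/p&gt;

```javascript
// Dojo 0.9 version, roughly:
//   dojo.xhrGet({ url: "/data/tips", handleAs: "json",
//                 load: function (data) { showTips(data); } });

// jQuery 1.x equivalent -- one library download, then:
$.getJSON("/data/tips", function (data) {
  showTips(data);
});

// The absolutely-positioned help bubbles got simpler too:
function showTip(x, y) {
  $("#help-tip")
    .css({ position: "absolute", left: x + "px", top: y + "px" })
    .show();
}
```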
&lt;p&gt;After all the profiling and modifying, I managed to cut the page load from a little over 8 seconds to a little over 4 seconds.&lt;/p&gt;
&lt;p&gt;As a side benefit, it was actually &lt;em&gt;fun&lt;/em&gt; to write Javascript. With jQuery&amp;rsquo;s well-documented API and function list (something Dojo definitely lacked, last I checked), I spent more time Getting Things Done than tinkering and experimenting. I even found myself improving the site&amp;rsquo;s visual effects simply because jQuery made it so easy to do. It was almost completely painless.&lt;/p&gt;
&lt;p&gt;This is not a slam on Dojo at all, because if you need some advanced UI features, I&amp;rsquo;d definitely go down the path towards Dojo. This was simply using the best tool for the job.&lt;/p&gt;
&lt;p&gt;I am now looking for more excuses to use the tiny, tidy little jQuery library in other spots. jQuery brought joy to Javascript development, something I wouldn&amp;rsquo;t really have thought possible.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Django: My Framework of Choice</title>
      <link>https://nolancaudill.com/2007/10/10/django-my-framework-of-choice/</link>
      <pubDate>Wed, 10 Oct 2007 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2007/10/10/django-my-framework-of-choice/</guid>
      <description>&lt;p&gt;I am a web developer by trade, and like any skilled worker, I am always looking for the best tools for the job. With my current job, I write PHP on top of a custom in-house CMS. Off the clock though, I work with Python and &lt;a href=&#34;http://djangoproject.com&#34; title=&#34;django&#34;&gt;Django&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First off, I like the concept of a framework. With web development, 95% of the stuff you do is stuff you&amp;rsquo;ve done before and unavoidably will do again. A well-written framework alleviates a lot of this. Most tasks can be broken down into two steps: 1) grab objects from the database, and 2) show them. The visual design of a website is always the fun part to do after you get all the heavy lifting done. There are also things like form handling, user authentication, writing XML feeds, and caching that get done over and over, and about which frameworks are nice enough to say, &amp;ldquo;hey, I&amp;rsquo;ll take care of this boring, repetitive stuff and let you get on with creating something cool.&amp;rdquo;&lt;/p&gt;
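&lt;p&gt;Those two steps are easy to sketch even without a framework. Here is a plain-Python toy version, with an in-memory table and a string template standing in for the real database and template layer:&lt;/p&gt;

```python
import sqlite3
from string import Template

# Step 1: grab objects from the database (an in-memory table for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (title TEXT)")
conn.executemany("INSERT INTO posts VALUES (?)", [("Hello",), ("World",)])
posts = conn.execute("SELECT title FROM posts ORDER BY title").fetchall()

# Step 2: show them.
page = Template("<ul>$items</ul>").substitute(
    items="".join("<li>%s</li>" % title for (title,) in posts))
print(page)
```

&lt;p&gt;A framework&amp;rsquo;s job is to make both steps declarative; the interesting work is everything this sketch leaves out.&lt;/p&gt;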
&lt;p&gt;As far as specific frameworks go, I&amp;rsquo;ve read quite a bit about Ruby on Rails and have written some simple apps to test it out. One reason I didn&amp;rsquo;t latch on to it was that I didn&amp;rsquo;t know Ruby. Another is that deploying it seemed tricky, and you trade runtime speed for speed of development. This is true of any interpreted language, but Ruby on Rails takes it to the extreme in both directions. It also seems &lt;em&gt;too&lt;/em&gt; opinionated at times, stepping in the way of what I really want to do.&lt;/p&gt;
&lt;p&gt;Another one I&amp;rsquo;ve given more than a passing glance to is CakePHP. While there are several good ideas in there, and it does increase coding efficiency, it has one big, glaring black mark against it, and that is that it is written in PHP. PHP is a good language in that it is fairly powerful, runs everywhere, runs relatively fast, and everyone knows it, but it is an unattractive and inconsistent language. This &lt;a href=&#34;http://toykeeper.net/soapbox/php_problems/&#34; title=&#34;php&#34;&gt;link&lt;/a&gt; sums it up quite nicely.&lt;/p&gt;
&lt;p&gt;Now onto my favorite: Django. When I first stumbled on it earlier this year, I was amazed by the simplicity of the framework and how it just made sense, from both organizational and logical standpoints. There is a very strict model/view/controller demarcation (except Django calls the controller a view, and the view a template), which leaves little debate about which piece of code goes where, and in my opinion that is a great thing. It also has lots of super-handy things that are built in or easily added, such as an auto-generated admin interface, unit and regression testing, modular apps, an easy-to-use templating language, great form handling, and user authentication.&lt;/p&gt;
&lt;p&gt;Far and away my favorite feature is the generic views. In my opinion, most of web development is grabbing an item or a list of items from the database and showing them on a page, possibly paginating them if there are a lot, or maybe dividing them up by publication date. With generic views, Django abstracts almost all of this repetitiveness away so you can get on with making it look pretty or adding some cool new feature. Django brings back a lot of the joy of web development that sometimes gets lost in the grind, or in writing that same hunk of logic for the hundredth time.&lt;/p&gt;
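&lt;p&gt;For a taste of what that looks like, here is a 0.96-era urls.py wired up to the list generic view (the &lt;code&gt;blog&lt;/code&gt; app and &lt;code&gt;Post&lt;/code&gt; model here are hypothetical):&lt;/p&gt;

```python
# urls.py -- a hypothetical blog app's Post model fed to a generic view
from django.conf.urls.defaults import *
from blog.models import Post

info_dict = {
    'queryset': Post.objects.all(),
    'paginate_by': 10,  # pagination for free
}

urlpatterns = patterns('django.views.generic.list_detail',
    (r'^posts/$', 'object_list', info_dict),
)
```

&lt;p&gt;No view code at all: &lt;code&gt;object_list&lt;/code&gt; fetches the queryset, paginates it, and renders &lt;code&gt;blog/post_list.html&lt;/code&gt; by convention.&lt;/p&gt;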
&lt;p&gt;Not one to follow hype, I didn&amp;rsquo;t fully buy into Django until I built a full site with it. Meghan and I built a soon-to-be-launched site for our upcoming wedding. We built it from the ground up (with Django, of course) in the span of about 3 hours, and that time included everything from registering the domain to designing the site. It even has a working blog with a commenting system and a full-featured admin tailored to my exact needs. Django made the easy things &lt;em&gt;really&lt;/em&gt; easy and the hard or repetitive things actually fun to work with.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>Speaking of Django...</title>
      <link>https://nolancaudill.com/2007/10/10/speaking-of-django/</link>
      <pubDate>Wed, 10 Oct 2007 07:00:00 +0000</pubDate>
      <guid>https://nolancaudill.com/2007/10/10/speaking-of-django/</guid>
      <description>&lt;p&gt;Within minutes of my last post about my infatuation with Django, I saw this &lt;a href=&#34;http://venturebeat.com/2007/10/10/curse-a-growing-site-for-online-gamers/&#34;&gt;link&lt;/a&gt;. It appears that Django is doing quite well out there in the real world.&lt;/p&gt;
&lt;p&gt;Curse.com actually uses a lot of my favorite pieces of &lt;a href=&#34;http://about.curse.com/technology/&#34;&gt;software&lt;/a&gt;. Nice.&lt;/p&gt;
</description>
    </item>
  </channel>
</rss>
