
Monday, October 15, 2007

Palm Centro review



Okay, now we know what you're thinking. Sure, we (lovingly) raked Palm over the coals in our open letter to the company, and yeah, we haven't been the sweetest of hearts to the crew from Sunnyvale (with good reason, of course). However, if you've paid attention to our past good-intentioned prodding, then you'll know that getting our hands on a new Palm device still gives some of us geeky chills.

After seeing scores of "leaked" photos of the Centro, and hearing enough internet chatter about the device to make your brain vibrate like a tightly-wound piano string, actually getting our hands on the phone was honestly a bit of a surprise, both bad and good. We're going to break it down piece by piece and hopefully give you a rounded impression of the smartphone crown-chaser (or at least princess-in-waiting).

The design

First off, let's get a few basics out of the way. Yes, the phone is considerably smaller than past Treo devices. Having used a 650, 680, and 750, we can honestly say there is a massive difference between holding this phone in your hands and holding any other Palm device. Is this a good thing? For the most part, yes, though there are drawbacks to its diminutive size, which we'll get to in a moment. But for now, let's talk aesthetics.

The Centro has a rounded, symmetrical design that works without being especially fussy or impressive. We would have liked to see Palm put the real estate to better use with a larger screen and less plastic, but this is certainly a step in the right direction for the company... though a few more steps would have gone a long way.






The major difference beyond the overall width and length is the thickness. The phone is thin, though not as lean as the BlackJack, Q, iPhone, or Pearl (the device it most closely matches in size). No, the fact is this: amongst all of these phones, the Centro is still the fattest, though we couldn't tell you why.





The phone comes in two glossy colors, a cherry red and a metallic black (it's actually got silver flecks in it). They're attractive enough, but we continue to find fault with Palm over the gray stripe -- it makes the phone look like a Sony Ericsson from 2001, and serves no purpose as far as we can tell. When Helio designed the Ocean, it used a silver line splitting the sides to create a slimming effect, and if we didn't know better, we'd say that's the impetus for this out-of-place touch.






The screen is a miniature 2 inches, though it looks fantastic at its 320 x 320 resolution and fairly high pixel density. It's impressive for its size, and certainly easy on your eyeballs. We'd again like to congratulate Palm on overcoming the 2-pixel white border surrounding the screen, which has plagued the company's devices for as long as we can remember. Kudos.





We know the keyboard is on your mind, so here's the deal: it isn't that great, but it isn't a deal breaker. The phone is designed with the youth market (and women, from what we can tell) in mind, and if that's the case, they should be happy with the full QWERTY of the Centro. The jelly-ish buttons aren't exactly a joyride for us to press, though we've got massive, bear-like claws. The keyboard works; certainly better than T9, and definitely better than no keyboard at all. Still, you'll find yourself backtracking plenty when your nail hits a key next to the letter you meant to press.





The buttons on the "gray stripe" are more standard Treo fare, though their tactile feel on this phone is nonexistent, and we found ourselves re-pressing them constantly. They're too flush, and frankly too big for the purpose they serve. The 4-way rocker is good, however, and should be plenty responsive for anything you'll need it for. This is a good time to nitpick Palm on a design change they made a while back that really rears its ugly head here -- the movement of the "menu" button to the lower right hand corner of the keyboard. Sorry guys, you have to get to drop-down menus too often for it to be relegated to this useless and hard-to-reach corner. Fix please.





Another flaw which Palm's designers don't seem to get is the sunken screen. Look, do you even use your devices? It's a nerve-rattling pain to try and tap the sides of the touchscreen when you've got it buried seemingly four inches deep in the phone. The screen needs to be flush with the surface, or near to it -- this is a maddening and obvious problem which the Centro does nothing to correct. In fact, it seems to be amplified here.






Other than that there are no design surprises. All of the side buttons, sound on / off switch, awkward HotSync port, and 2.5mm headphone jack are in exactly the same place as every other Treo.

The OS

You'd think there wasn't much to say here that hasn't already been said, and you would be mostly correct. We won't bore you by detailing our complaints about Palm's aging (aged, rather) OS, but we will point out a few items of interest.





Firstly, this reviewer, having switched to the 750 and its Windows Mobile interface, had quite a shock returning to the Palm OS. We forgot how fast and responsive it can be, and it was a reminder of why we liked Palm to begin with. We know that WM has a lot more bells and whistles, Symbian is kept current, and the iPhone's OS X iteration is fancy as all get-out, but Palm still shines in a lot of ways. The system is fast, has very low loading times for applications, and makes getting most tasks done crushingly simple.






Of course, you know the trade-offs. This is not current software, and it shows. Palm has gone to the trouble of updating the look and functionality of some apps, like the camera and PTunes, yet most remain staid and ancient in appearance. We don't get it -- why not just give the OS a paint job if you can't rebuild it? Our minds are still boggled by the fact that Palm can't even fix the anti-aliasing on highlighted icons. Call us, Ed -- we know anxious teenagers just dying to skin your UI.





The company has added a few new apps as they've gone along, bundling the aforementioned PTunes, plus Google Maps, as well as a new IM app, On Demand (a kind of one-stop portal), and of course Sprint TV.





Speaking of, Sprint TV is a nice addition, giving you a pretty wide range of channels to view, with solid EV-DO connections -- though the resolution leaves something to be desired.





The IM app is also a plus, with a simple and straightforward interface that doesn't require much time to get comfortable with.

They also include DataViz's DocumentsToGo, a PDF / Word / Excel editor, but you're still stuck with Blazer for web duties, and the rest of Palm's vintage fare for general tasks. It works... but, bleh.

The phone

What can we say? The phone is good, and the sound quality is solid. Palm equipped the Centro with a nice loud earpiece and speaker, and both do their job admirably. One problem of note is that if you lay this phone on its back during a speakerphone call, you lose about 50-percent of your sound. The effect is almost akin to sweeping a resonant filter down on the signal, like the "underwater" effect you hear in your favorite rave anthems. Point being: keep it on its face (hey, you won't have to worry about scratching that screen!).






The 1.3-megapixel camera is nothing to write home about -- in fact, it's terrifically mediocre. The performance on the camera and camcorder apps is also sluggish to the point of annoyance, but we've learned to not expect too much in this department.

Little details -- like the prompt to add a number you've dialed that isn't stored in your contacts, and the "avoid with SMS" feature for incoming calls -- are Palm hallmarks that still feel plenty helpful.

Wrap-up

The real selling point on this device for a lot of people has been its much-touted $99 price point. Of course, you have to keep in mind that the figure takes into account an "instant discount, mail-in rebate, and qualifying two-year Sprint service agreement," which means the phone isn't nearly as cheap as it sounds. That said, the fact that the offer is on the table is a great move for Palm, and should help push a lot of these out the door.

It would be easy to love this phone, but there are too many minor hang-ups that contribute to an overwhelming sense of letdown. Nostalgic affection aside, it doesn't feel like Palm is taking advantage of the opportunities it has right now. Things like its complicated syncing process (particularly with Macs) don't jibe with Palm's bid for the "youth market," who undoubtedly are interested in iTunes-like simplicity.

Still, brainy teens, casual tinkerers, and young technophiles of all suits will probably be stoked on the wide variety of options for the money. Power users, early adopters, and those seriously jaded by Palm's inability to really deliver something new might want to look elsewhere.

Hitachi breakthrough: 4TB disks by 2011

When Hitachi -- the first disk manufacturer to go perpendicular and subsequently break the 1TB consumer disk drive barrier -- speaks about advances in hard disk technology, you'd be wise to listen. Today they're touting the world's smallest read-head technology for HDDs. The bold claim? 4TB desktop (3.5-inch) and 1TB laptop (2.5-inch) drives within the next 4 years. The new recording heads are more than 2x smaller than existing gear or about 2,000 times smaller than a human hair. Hmmm, Samsung may have to update their SSD vs. HDD graph after this, eh?

Lumenlab shoves PC inside 42-inch 1080p display, calls it Q



There are all-in-one PCs, and then there's the Q. This behemoth sports a unique identity crisis, as it attempts to pose as an aluminum-framed HDTV while featuring a full-fledged computer within. Nevertheless, the 42-inch Q packs a 1080p panel, compatibility with Lumenlab's own Hotwire PnP powerline networking technology, a fanless design, 1TB of HDD storage, 2GB of RAM and an Intel Core Duo processor. Unfortunately, details beyond that are fairly slim, but we should get a better idea of specifications and pricing when its ship date draws closer.

K850 all set for release

The greatest camera phone you'll ever own gets official. And ours has just arrived.








The new K850 is official and good to go.




Sony Ericsson has upped its game again. We've just taken delivery of our K850 and it really is one of the best camera phones we've ever got hold of.


The new cell, which Sony today confirmed would be in stores this month, packs so many features, it's obscene.


The 5-meg snapper, which matches the N95 and G600, is a winner, with auto focus, a Xenon flash and Photo Fix to improve light balance.


Best of all, the lens is properly protected, with the cap sitting inside the body. So no accidentally opening it in your pocket, a la every top notch camera phone on the market.



It's incredibly small too, and still manages to add HSDPA and video at 30 fps. Tasty.


Keep your eyes glued right here for a full gallery and video tomorrow.


Old iPods get new features

Apple’s not bringing its new iPod interface to old models. So hackers have instead!







Take one iPod, add a pinch of new menus, save a fortune on a new model!...




There’s a good chance you’re disappointed by the new iPods. Not because they’re anything short of brilliant, but because yours now looks a bit dated next to the new interface.


Fear not, however! Intrepid hackers have brought Apple’s shiny new menus to older models, proving you can indeed teach an old ‘Pod new tricks.


You should be warned that the process involves modifying your iPod’s firmware – something that’ll almost certainly void what warranty you’ve got left.



Still, it beats shelling out for a new iPod if there’s still plenty of life in your current model. Head here to download the modified firmware. Happy hacking!


10 Valuable Tips for Creating Your Web Site

Introduction

When looking for ways to build up your web site, even minor steps can make a huge difference. The most helpful information and best content will have little impact without simple protocols that make your web site easier to use and more visually appealing. This paper focuses on 10 tips you can employ to ensure your web site is effective from the day it goes live.

1. Accessibility

Web site accessibility has recently become a very important issue in the web community. Because of Section 508 of the Rehabilitation Act of 1973, all web sites and pages created by Federal agencies and Federal contractors after June 21, 2001 must comply with its provisions. The purpose of the law is to make web sites accessible to all individuals, including those with disabilities. The World Wide Web Consortium (W3C) developed Web Content Accessibility Guidelines (May 1999), which expand the scope of Section 508.

This is very important to all web developers, whether you are a government agency, a contractor who does work for the government, or a private firm with its own web site. An accessible web site refers to any content or information provided via an online medium that all individuals (including those with disabilities) can easily access and understand. Disabilities include not only visual impairments, but auditory, cognitive, and physical impairments as well. They can range from the very severe (total blindness, for example) to something as simple as the increasing inability to see contrasts that develops as we age. Greater accessibility means more people can fully utilize your web site's features.

There are a number of aids available for enhancing accessibility. These range from programs like Jaws or IBM's Home Page Reader -- which read the page aloud for the visually impaired -- to sip-and-puff systems for the quadriplegic. It is up to the web developer to create pages that allow these systems to provide an equivalent alternative for these individuals. Think of accessing a web page as being similar to a play or opera. If you read the script without benefit of the actors' interpretation, lights, scenery, or music, you are only experiencing one aspect of the author's intent. The same is true of web access: if you can only hear the words being read, for example, with no description of the images on the page or other visual components, you would not fully experience the information being provided on the page. That is why, when we include an image, it is important to use the "alt" attribute to provide a description of the image. This enables a screen reader to read this information aloud for a visually impaired user.

There are several other tips as well: use table headers with tables of information, not just placeholder cells. Be careful about using the colors red and green together, because of red/green colorblindness. Use sounds with care -- not everyone can hear, nor does everyone have speakers set up on their computer.
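As a quick illustration of the first two points (the file name, figures, and table values here are made up), the markup below shows an image with a text equivalent and a data table with real header cells:

<!-- image with a text equivalent for screen readers -->
<img src="sales-chart.png" alt="Bar chart: sales grew 20 percent from 2005 to 2006" />

<!-- table headers (th) instead of plain placeholder cells -->
<table>
  <tr><th scope="col">Product</th><th scope="col">Price</th></tr>
  <tr><td>Widget</td><td>$9.95</td></tr>
</table>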

So how do you know if your web site is accessible? Go to http://webxact.watchfire.com and enter the URL for a web page. This free application will test your page and let you know where it does not comply.

2. Security

Security is crucial to the success of your web site. There are several steps you can take to minimize the risk that your web sites will be subjected to a breach of security.

Security Updates
Be sure you are running the most current version of your web server. Monitor your vendor updates, and perform regular maintenance.

Validate User Input on the Client and the Server
Validating user input on the client is great for the user experience. However, you need to validate input on the server side as well. Consider that there are tools that, to your server, look and feel like a legitimate web browser when in fact they are designed to fake input such as passwords.

Audit Logs
Maintain and review server logs to check for suspicious activity.

Common Settings
Be sure to minimize the risk to your server by minimizing the things users can do on your server. For example, don’t permit users to browse the directory structure of your site unless it’s necessary.

Lockdown Your Server
Most servers have a standard development mode and a production mode. For example, Microsoft's Internet Information Server (IIS) has a lockdown utility that minimizes the attack surface of your web site.

3. Web Server Statistics

How many visitors do you have? What pages do they frequent? What times do they log on? Utilize a web tool to assist you in not only collecting these statistics but also analyzing and correlating them. Web tools, such as Web Trends, will aid you in collecting and utilizing this knowledge to answer these questions about your site. Build a web page that not only follows appropriate standards, but also drives repeat visitations.

4. Dynamic Technologies Styles
Are the pages within your site beginning to feel and act a little plain? Would you like more ways to format your documents and give users more interaction with them? If so, then your site could make use of languages such as Cascading Style Sheets and JavaScript.

Cascading Style Sheets (CSS) technology gives you more control of page layout and the ability to control the design of multiple pages on your site from a single file. Additionally, CSS allows you to develop more sophisticated layouts, more font schemes, and even more interactivity for your pages than was possible using just HTML.

JavaScript techniques are needed to develop cutting-edge, interactive web sites. From opening windows to image-flipping and form validation, Javascript can help you build exciting, dynamic web pages.

The integration of HTML, JavaScript, and Cascading Style Sheets techniques is collectively known as Dynamic HTML, or DOM scripting. Utilizing all three languages allows you to fully exploit the capabilities of Netscape Communicator, Firefox, and Microsoft Internet Explorer.
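As a rough sketch of how the three languages work together (the element names and rule are invented for the example), a CSS rule can sit in the page and a small script can apply it in response to a user event:

<html>
<head>
  <style type="text/css">
    .highlight { background-color: yellow; font-weight: bold; }
  </style>
  <script type="text/javascript">
    // DOM scripting: change the element's class so the CSS rule takes effect
    function highlight(id) {
      document.getElementById(id).className = 'highlight';
    }
  </script>
</head>
<body>
  <p id="intro">Click the button to restyle this paragraph.</p>
  <input type="button" value="Highlight" onclick="highlight('intro');" />
</body>
</html>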

5. Efficient Use of Appropriate Design Software

In the past, many web developers eschewed graphical web editing packages and boasted of developing pages using a simple text editor (for example, Notepad). There is still a place for text editors, but efficient designers and developers both use appropriate design software, often manually tweaking the code it produces. Such packages offer a what-you-see-is-what-you-get (WYSIWYG) environment for designers and code-writing tools for developers. They have the software complete repetitive steps, and let designers and developers focus on what they do best.

There are many options, but here are some of the major ones:
Dreamweaver (Adobe, formerly Macromedia)
• The most popular package
• Offers both design (layout) and development (programming) support
• Supports all major server-side scripting languages (ColdFusion, ASP, PHP, JSP), JavaScript, XML, and ASP.NET (VB.NET and C#)
• Integrates well with Adobe Flash and Adobe Fireworks (both formerly Macromedia)

GoLive (Adobe)
• Offers both design (layout) and development (programming) support
• Supports several scripting and markup languages (PHP, JavaScript, SVG-t, SMIL)
• Integrates well with long-standing Adobe products (Photoshop, Illustrator, InDesign)

FrontPage (Microsoft)
• Offers both design (layout) and development (programming) support
• Supports Microsoft JScript and ASP.NET (VB.NET and C#)
• Integrates well with Visual Studio .NET and the Microsoft Office Suite

Microsoft Visual Studio (Microsoft)
• Primarily provides development (programming) support
• Integrated Design Environment (IDE) for developing in .NET environment
• Supports .NET languages (primarily VB.NET and C#, but other extensions for other
languages provided by third parties)

Eclipse (open source)

• Primarily provides development (programming) support
• Integrated Design Environment (IDE) for developing in any environment, but mostly
ommonly used for J2EE
• Supports a multitude of programming languages (not language-specific).


6. Standards and Browser Independence

Web site development has come a long way. There are lots of new tools that will help with web-page design, not to mention web sites that offer suggestions and ideas for making your web site absolutely incredible. Probably the biggest movement is the increasing use of Cascading Style sheets to separate page content from formatting. We are also seeing more sophisticated use of JavaScript to make pages more dynamic and, therefore, more interesting. Along with this, the World Wide Web Consortium (W3C) has instituted an effort to standardize how browsers handle the display of web pages through the use of XHTML. It is hoped that among all of these initiatives, programming for the web will become less a matter of making sure our pages work on all browsers by testing against each one, and more one of creating web pages that are useful, accessible, and exciting.

There are a vast number of resources on the web to help improve web sites. For information about the new XHTML standards, as well as help with Cascading Style Sheets and DOM Scripting (using JavaScript to make your web pages more dynamic), the W3C pages (http://www.w3.org) are invaluable. They include examples and tutorials, both of which are very well done. For some wonderful examples of Cascading Style Sheets, we recommend CSSZenGarden.com (http://www.csszengarden.com). The organization that runs this site supplies an html page with the required content. Designers are invited to create an external style sheet to format the page. New contributions are regularly posted.

Another excellent resource is http://www.dynamicdrive.com. Google Groups (formerly Deja), http://groups.google.com/groups, is helpful as well: you can post a question and, within 24 hours, you will usually get one to three technically correct answers. Other potential resources include:

http://www.developer.com
This is a solid resource for most scripting/programming languages and is top-notch for Java.
http://www.codehound.com
This is another language resource and is especially helpful with Microsoft .NET technologies.
http://www.4GuysFromRolla.com
This is the definitive place to get ASP or ASP.NET information.
http://www.php.net
This is a good resource for PHP.
http://www.news.com
This is CNET News, which keeps you up-to-date on IT news.
http://www.theinquirer.net
This site provides a hardware outlook for six months to a year and a half.

7. Database Access with Server-side Scripting Languages

Static web pages are a good place to start, but they can quickly become time-intensive and an inefficient use of a designer's or developer's time. Database-driven web sites can refresh their own data, presenting up-to-the-minute information in a way manual updates never could. A dozen or so programmed pages can dynamically change so that they do the job of thousands of static pages. The benefits are clear: more timely information; fewer pages to maintain; and a freeing up of both designers and developers to enhance and further develop the functionality of a site, rather than its content.

But by itself, HTML is not up to this kind of job; that's not what it was designed to do. A server-side programming language is needed. There are several options, but here are a few of the major products available:

Active Server Pages (ASP) (Microsoft)
• Written using VBScript (server-side JavaScript also possible, but rare)
• Comes installed with Windows servers
• Can run in UNIX/Linux environment using Sun ONE
• Being somewhat overshadowed by ASP.NET

PHP Hypertext Preprocessor (PHP) (open source)
• C-like programming language
• No licensing cost (open-source)
• Can run on UNIX/Linux servers or Windows-based servers
• Close integration with MySQL database

ColdFusion (Adobe)
• Written using tag-based language which integrates well with web-editing software (can also be written using a scripting-like language)
• Easy to learn, quick to create and maintain pages
• Java-based architecture
• Can run on UNIX/Linux server or Windows-based servers

Java 2 Enterprise Edition (J2EE) (Sun)
• Written using cross-platform Java language
• Most often deployed on UNIX/Linux, but can run on any system
• Web page scripting using Java Server Pages (JSP)
• Most appropriate for enterprise-wide installations involving multiple servers, databases, and possibly a mainframe.

ASP.NET (Microsoft)
• Written using VB.NET or C# (other languages possible)
• Deployed in a Windows server environment
• Requires use of Visual Studio .NET for development
• Most appropriate for enterprise-wide installation with other Microsoft solutions

Perl (open source)
• Not really a scripting language (doesn’t co-habitate with HTML)
• Powerful, flexible language, good for dealing with patterns or manipulating data
• Uses less modern Common Gateway Interface (CGI) model
• More appropriate for communication between different applications on a server

Any of these will work with any Relational Database Management System (RDBMS). Here are some common ones:

Oracle (Oracle Corporation)
• Fully featured, flexible, scalable
• Works on UNIX/Linux or Windows servers

SQLServer (Microsoft)
• Fully featured, flexible, scalable
• Works on Windows servers
• Integrates well with .NET

MySQL (open source)
• Fully featured, flexible, scalable
• No licensing costs (open-source)
• Integrates especially well with PHP

Access (Microsoft)
• Friendly user-interface for database management
• Integrates well with MS Office suite
• Not fully featured, limited capabilities, only appropriate for small-scale implementations.

8. Using Image Editors for Fast Loading Graphics

Use Adobe Photoshop/ImageReady or Fireworks to create the appropriate type of graphic file, typically GIF or JPEG. There are other image editors available, but Photoshop/ImageReady and Fireworks are currently the most popular and are considered the industry standard.


9. Site Planning, Design, and Management

“On time, within scope, and within budget” is the project management motto these days. This is also true for IT projects. IT managers can no longer live in the IT black hole. All IT requires fundamental project management best practices. Learn how to communicate with your staff. Clarify your project's business goals to ensure that the project supports your company's vision. Learn how to follow your project through from inception to implementation. Apply your management skills to the concepts of web design. Apply a strategic focus within your organization to help save time and resources. Learn the benefits of various software packages that aid in efficiency.

All of these project management practices can help your web site design projects run more smoothly and ensure your web site does what it's supposed to do.

10. Technological Flexibility

If your web application is data-driven, it is imperative that sharing information with different applications and/or platforms be done in the most flexible way possible. Transforming data from one format to the next, however, can be arduous and considerably time-consuming. Fortunately, storing data in an extensible format, and working with it using XSL, has become relatively easy.


Extensible Markup Language (XML) allows developers to store raw data in a text file marked up with an HTML-like syntax. With the use of Extensible Stylesheet Language Transformations (XSL-T), Formatting Objects (XSL-FO), and CSS, developers are now able to transform this raw data into an application-specific format. Languages commonly used to augment an XML application are listed below:

• Extensible Markup Language (XML): Used to store raw data files
• Document Type Definitions (DTDs): Used to validate XML documents
• eXtensible Stylesheet Language (XSL) and Cascading Style Sheets (CSS): Used to transform the display of an XML document into an application-specific format. XSL offers features like XPath, functions, modes, and dynamic modification of stylesheets, which are commonly featured in many scripting and programming languages
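To make the idea concrete, here is a minimal sketch of a data file and a stylesheet that transforms it into HTML (the element names and values are entirely made up for the example):

<?xml version="1.0"?>
<catalog>
  <book><title>XML Basics</title><price>19.95</price></book>
  <book><title>Practical XSLT</title><price>24.50</price></book>
</catalog>

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body>
      <!-- turn each book element into a paragraph -->
      <xsl:for-each select="catalog/book">
        <p><xsl:value-of select="title"/>: <xsl:value-of select="price"/></p>
      </xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>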


To Summarize
1. Make sure your web pages are accessible
2. Take steps to minimize security risks
3. Use web server statistics to determine how popular your site is
4. Utilize dynamic technologies styles
5. Be aware of web development software and how to use it efficiently to enhance your web site
6. Use XHTML, DOM Scripting, and CSS to make your site standards-based and browser independent
7. Use a database with a server-side scripting language so that a handful of dynamic pages can do the job of many static pages
8. Use image editors for fast-loading graphics
9. Learn to use planning and project management techniques to build great web sites with large teams
10. Use new technologies like XML, DTDs, and XSL to help your application communicate with other platforms in the most flexible manner

If you follow these simple rules, you too can create an excellent web site.

Understanding Domain Name System (DNS)

Domain Name System (DNS) makes it possible to refer to Internet Protocol (IP) based systems (hosts) by human-friendly names (domain names). Name Resolution is the act of determining the IP address (or addresses) of a given host name.

Benefits of DNS
  • Domain names can be logical and easily remembered.
  • Should the IP address for a host change, the domain name can still resolve transparently to the user or application.
The structure of Domain Names
  • Domain names are written as elements separated by dots, with the topmost element on the right (e.g. www.yahoo.com). IP addresses, by contrast, have their topmost element on the left.
  • Each element may be up to 63 characters long. The entire name may be at most 255 characters long.
  • The rightmost element in a domain name is called the Top-Level Domain (TLD). Referring to the above example (www.yahoo.com), the TLD is 'com'.
  • If a domain name is not shortened, it is called a Fully Qualified Domain Name (FQDN). For example, briefcase.yahoo.com can be specified by a machine in the yahoo.com domain either as briefcase.yahoo.com (the FQDN) or simply as briefcase.
Host names map to IP addresses in a many-to-many relationship. A host name may have one or more IP addresses. Conversely, an IP address may have multiple host names associated with it.

Hosts that are designated to perform email routing are known as mail exchangers. These machines should have special-purpose records in DNS called Mail eXchanger (MX) records. When an SMTP server (mail server) needs to send mail to a remote domain, it does a DNS lookup for the Mail Exchanger (MX) of that remote domain. A domain can, and should, have multiple mail exchangers: mail that cannot be sent to one mail exchanger can instead be delivered to an alternative server, thus providing failsafe redundancy.
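If you want to see this in action, a quick lookup with the dig utility returns a domain's mail exchangers and their preference values (example.com is a stand-in domain and the answer shown is purely illustrative):

# dig example.com MX +short
10 mail.example.com.
20 mail2.example.com.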

Different types of Domain Name Servers
  1. Root Name Servers - These sit at the top of the DNS hierarchy. They do not hold the records for individual domains; instead, they tell resolvers which name servers are responsible for each top-level domain (such as in, edu, com, etc.). These servers are fairly static, and every machine on the internet has the capability of reaching any of them. There are only a small number of root name server identities worldwide (thirteen named servers, many of them mirrored around the world).
  2. Authoritative Name Servers - These are the servers that the root (and TLD) name servers refer queries to. These servers hold the actual information on an individual domain. This information is stored in a file called a zone file; zone files are the conceptual descendants of the original HOSTS.TXT file.
  3. Resolving Name Servers - These are the servers that do most of the work when you are trying to get to a machine with a certain host name. Besides being responsible for looking up data, they also temporarily store the data for host names they have looked up in a cache, which allows them to speed up resolution for host names that are frequently visited.
Zone
A zone holds the information about a domain's database. It does this by maintaining two types of files:
Zone File - Used to map host names to addresses, to identify the mail servers, and to provide other domain information.
Reverse Zone File - Responsible for mapping IP addresses to host names, which is exactly the opposite of what the zone file does.

Note: The zone file and the reverse zone file have to be maintained by the user.
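For a sense of what such a file looks like, here is a minimal BIND-style forward zone file sketch (all names, addresses, and timer values are invented for the example):

; File: db.example.com -- forward zone file for example.com
$TTL 86400
@       IN  SOA  ns1.example.com. admin.example.com. (
            2007101501 ; serial
            3600       ; refresh
            900        ; retry
            604800     ; expire
            86400 )    ; minimum TTL
@       IN  NS   ns1.example.com.
@       IN  MX   10 mail.example.com.
ns1     IN  A    192.168.0.1
www     IN  A    192.168.0.2
mail    IN  A    192.168.0.3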

Name Server Hierarchy
Master Name Server - Also called primary server. This contains the master copy of data for a zone.
Slave Name Server - Also known as secondary server. This provides a backup to the master name server. All slave servers maintain synchronization with their master name server.
A zone may have multiple slave servers. But there will be only one master name server per zone.

Apache : Name-based Vs IP Based Virtual Hosting

Often, when you attend interviews for network administration related jobs, one question you may encounter while discussing web servers is the difference between name-based and IP-based virtual hosting. Here I will explain the difference between the two.

In IP-based virtual hosting, you are running more than one web site on the same server machine, but each web site has its own IP address. In order to do this, you have to first tell your operating system about the multiple IP addresses (see the section below on setting up multiple IP addresses on a single NIC). You also need to put each IP in your DNS, so that it will resolve to the names that you want to give those addresses.

In name-based virtual hosting, you host multiple websites on the same IP address. But for this to succeed, you have to put more than one DNS record for your IP address in the DNS database. This is done using CNAME records in BIND; you can have as many CNAMEs as you like pointing to a particular machine. Of course, you also have to uncomment the NameVirtualHost directive in the httpd.conf file and point it to the IP address of your machine.

#FILE: httpd.conf
...
NameVirtualHost 192.168.0.1
...
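To complete the picture, the corresponding VirtualHost blocks might look like the sketch below (the host names and document roots are made up; both sites share the single IP address declared above):

#FILE: httpd.conf (continued)
<VirtualHost 192.168.0.1>
    ServerName www.example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost 192.168.0.1>
    ServerName www.example.org
    DocumentRoot /var/www/example.org
</VirtualHost>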

Setting up multiple IP addresses on a single NIC

In Linux, you can bind multiple IP addresses to a single NIC. This is usually done when you are using your Linux machine as a web server hosting multiple domains, and you want to bind each domain to a unique IP address. This is how it is done.
Let us assume that you already have a NIC which is bound to a static IP address. Then you will have a file called /etc/sysconfig/network-scripts/ifcfg-eth0. My ifcfg-eth0 file has the following entries:
# File: ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
BROADCAST=192.168.0.255
NETWORK=192.168.0.0
HWADDR=00:80:48:34:C2:84
Now, to bind another IP address to the same NIC, I create a copy of the above file ifcfg-eth0 and name it ifcfg-eth0:1:
# cd /etc/sysconfig/network-scripts
# cp ifcfg-eth0 ifcfg-eth0:1
Now just change the values of the DEVICE and IPADDR in the file as follows:
# File: ifcfg-eth0:1
DEVICE=eth0:1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.5
NETMASK=255.255.255.0
BROADCAST=192.168.0.255
NETWORK=192.168.0.0
HWADDR=00:80:48:34:C2:84
And lastly, restart the networking service. If you are using RedHat, then it is as simple as:
# service network restart

How to install a Network card in linux

There are different ways of installing a network card in Linux, depending on the distribution that you are using. I will explain each of these methods here.
1) The Manual method
First, open the computer case and insert the network card into an empty PCI slot. Then boot up your machine to load Linux. Log in as root and navigate to the directory /lib/modules/kernel_version_number/net/. Here you will find the modules supported by your system. Assuming that you have a 3Com ethernet card, in which case the module name is 3c59x, you have to add this to the /etc/modules.conf file so that the machine detects the card each time it boots.
#File: /etc/modules.conf
alias eth0 3c59x
Note: If you have only one network card, it is known by the name eth0; the succeeding network cards in your computer go by the names eth1, eth2 ... and so on.
Now you have to load the module into the kernel.
root# /sbin/insmod -v 3c59x
Next, configure an IP address for the network card using ifconfig or netconfig (or leave it to a DHCP client if your machine gets its IP address from a DHCP server). Eg:
root# ifconfig eth0 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255
2) The Easy way
RedHat/Fedora distributions of Linux ship with Kudzu, a device detection program which runs during system initialization (/etc/rc.d/init.d/kudzu). This can detect a newly installed NIC and load the appropriate driver. Then use the program /usr/sbin/netconfig to configure the IP address and network settings. The configuration will be stored so that it is used upon system boot.


How to Assign an IP address

Computers may be assigned a static IP address or assigned one dynamically (via DHCP). Here I will explain the steps needed to assign an IP address to your NIC.
Choose one of the following methods:

=> Dynamic Host Configuration Protocol (DHCP) is a protocol used by networked computers (clients) to obtain IP addresses and other parameters, such as the default gateway, subnet mask, and IP addresses of DNS servers, from a DHCP server.
=> Static assignment, where you configure the address yourself.
Command line (static assignment):
/sbin/ifconfig eth0 192.168.1.3 netmask 255.255.255.0 broadcast 192.168.1.255
GUI tool: You can use the GUI tool /usr/bin/neat, the Gnome network administration tool. It handles all interfaces and supports both static assignment and dynamic assignment using DHCP.

Console tool : /usr/sbin/netconfig (Only seems to work for the first network interface eth0 but not eth1,...)

The ifconfig command does NOT store this information permanently; upon reboot it is lost. (Manually add the commands to the end of the file /etc/rc.d/rc.local to execute them upon boot.) The commands netconfig and /usr/bin/neat make permanent changes to the system network configuration files located in /etc/sysconfig/network-scripts/, so that the information is retained.
The Red Hat configuration tools store the configuration information in the file /etc/sysconfig/network. They will also allow one to configure routing information.
# File: /etc/sysconfig/network
# Static IP address Configuration:
NETWORKING=yes
HOSTNAME=my-hostname # Hostname is defined here and by command hostname
FORWARD_IPV4=true # True for NAT firewall gateways and linux routers. False for
# everyone else - desktops and servers.
GATEWAY="XXX.XXX.XXX.YYY" # Used if your network is connected to another
# network or the internet.

# Gateway not defined here for DHCP.

# Or for DHCP configuration: in the same file /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=my-hostname # Hostname is defined here and by command hostname
# Gateway is assigned by DHCP.
# File: /etc/sysconfig/network-scripts/ifcfg-eth0
# Static IP address configuration:
DEVICE=eth0
BOOTPROTO=static
BROADCAST=XXX.XXX.XXX.255
IPADDR=XXX.XXX.XXX.XXX
NETMASK=255.255.255.0
NETWORK=XXX.XXX.XXX.0
ONBOOT=yes
# OR for DHCP configuration:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
These ifcfg files are used by the script /etc/sysconfig/network-scripts/ifup to bring the various network interfaces on-line.
To disable DHCP, change BOOTPROTO=dhcp to BOOTPROTO=none.
In order for updated information in any of these files to take effect, one must issue the command:
root# service network restart

Open Source Law

Columbia University Law Professor Eben Moglen today announced the formation of the Software Freedom Law Center, whose mission is to provide pro-bono legal services globally to eligible non-profit open source software projects and developers.

"As the popularity and use of free and open source software increases and proprietary software development models are threatened, providing necessary legal services to open source developers is becoming increasingly important to prevent liability and other legal issues from interfering with its success," Moglen said. "The Law Center is being established to provide legal services to protect the legitimate rights and interests of free and open source software projects and developers, who often do not have the means to secure the legal services they need."

OSDL has raised more than $4 million for a newly-established IP fund that will provide the seed money for the new and independent legal center based in New York. Last year, OSDL announced a separate $10 million Linux Legal Defense Fund to provide legal support for Linus Torvalds and end user companies subjected to Linux-related litigation by the SCO Group. The new Law Center announced today will be an independent organization not affiliated with OSDL.

"OSDL is committed to supporting initiatives such as the Law Center to help protect the legitimate development and use of Linux and open source software," said Stuart Cohen, CEO of OSDL. "We encourage other companies and organizations like OSDL who are dedicated to securing the future of open source software to contribute to the Law Center and participate in its good works."

Overseeing the Law Center will be a distinguished board of directors comprised of Moglen; Diane Peters, General Counsel at OSDL; Daniel Weitzner, Principal Research Scientist at MIT's Computer Science and Artificial Intelligence Laboratory; and Lawrence Lessig, Stanford Law Professor and author.

"Both free and open source software face many emerging legal threats," said Lessig. "We should be skeptical of legal mechanisms that enable those most threatened by the success of open source and free software to resist its advance. The Law Center will serve as important support for the free and open source communities and for those that benefit from free and open source software."

Moglen, regarded as one of the world's leading experts on copyright law as applied to software, will run the new Law Center from its headquarters in New York City. The Law Center will initially have two full-time intellectual property attorneys on staff and expects to expand to four attorneys later this year. Initial clients for the Law Center include the Free Software Foundation and the Samba Project.

"Free software projects often face legal issues that need expert advice, but it can sometimes be difficult or prohibitively costly to obtain that advice through traditional legal channels," said Andrew Tridgell, head of the Samba project. "We are delighted that the Free Software Law Center is being setup under Eben Mogeln's excellent guidance. I think this is an important milestone in the maturity of the free software community."

Legal services provided to eligible individuals and projects include asset stewardship, licensing, license defense and litigation support, and legal consulting and lawyer training. The Law Center will be software license neutral and intends to participate directly in work currently underway around revisions to the GNU General Public License (GPL) with the Free Software Foundation. The Law Center will also work on issues around the proliferation of open source licenses.

The Law Center is dedicated to assisting non-profit open source developers and projects who do not otherwise have access to necessary legal services.

OSDL is dedicated to accelerating the growth and adoption of Linux in the enterprise.

Ajax Tutorials for Beginners

This is the first in a three-part series of the top Ajax tutorials from around the web. Part one is for beginners, how to build your first ajax application, and understanding how it works. Part two will be for novices who have experience with ajax, but would like to take their skills to the next level. And part three will be for experts who want to... build their own gmail application? Why not. So we'll get started from the very beginning.

What is Ajax? Ajax stands for Asynchronous JavaScript and XML. In a nutshell, it is the use of the (non-standard) XMLHttpRequest object to communicate with server-side scripts. It can send as well as receive information in a variety of formats, including XML, HTML, and even plain text. Ajax's most appealing characteristic, however, is its "asynchronous" nature, which means it can do all of this without having to refresh the page. This allows you to update portions of a page based upon user events: you can make requests to the server without reloading the page, and parse and work with XML documents.

Our first tutorial is from the Mozilla development page. This is the number one example because it provides you with all the information you need to make a basic "hello world" Ajax app. By the end of that article you will be able to make a simple Ajax application on your own.

Our next tutorial is from W3Schools. Here we take what you learned in the previous article and incorporate it into forms, so you can put together a log-in or registration form and check for valid credentials without navigating away from the current page. Very convenient. The overall trick is not to let the form post your input variables the normal way: your form should carry an onSubmit="javascript:function()" handler so that, instead of directing the user to a new page, the JavaScript call checks the credentials in real time.
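Here is a bare-bones sketch of that pattern (checklogin.php and the element ids are hypothetical); the onsubmit handler returns false so the browser never navigates away:

<form onsubmit="checkLogin(); return false;">
  <input type="text" id="username" />
  <input type="password" id="password" />
  <input type="submit" value="Log in" />
</form>
<div id="result"></div>

<script type="text/javascript">
// send the credentials to a server-side script and show its reply in the page
function checkLogin() {
  var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject("Microsoft.XMLHTTP");
  var user = encodeURIComponent(document.getElementById('username').value);
  var pass = encodeURIComponent(document.getElementById('password').value);
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
      document.getElementById('result').innerHTML = xhr.responseText;
    }
  };
  xhr.open('POST', 'checklogin.php', true);   // checklogin.php is hypothetical
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.send('username=' + user + '&password=' + pass);
}
</script>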

Now let's migrate into XML a little bit. Web Reference has a great tutorial on the basics of using XML with Ajax. The method they use for parsing XML files is very similar to the methods used in the above two examples, so make sure you totally understand how XMLHttpRequest works.

function ajaxRead(file){
    var xmlObj = null;
    // create the request object (native where available, ActiveX on older IE)
    if(window.XMLHttpRequest){
        xmlObj = new XMLHttpRequest();
    } else if(window.ActiveXObject){
        xmlObj = new ActiveXObject("Microsoft.XMLHTTP");
    } else {
        return;
    }
    xmlObj.onreadystatechange = function(){
        // readyState 4 means the response has fully arrived
        if(xmlObj.readyState == 4){
            updateObj('xmlObj',
                xmlObj.responseXML.getElementsByTagName('data')[0].firstChild.data);
        }
    };
    xmlObj.open('GET', file, true);
    xmlObj.send('');
}

// writes the extracted text into the page element whose id is passed in
function updateObj(obj, data){
    document.getElementById(obj).firstChild.data = data;
}


Although JavaScript is required to complete the Ajax request, you don't have to know a lot of JavaScript to make good use of Ajax. For example, you can make a form, and on submit use JavaScript (document.getElementById()) to get the contents of the input fields and send them to a PHP file. Let PHP parse all the data and return the result for JavaScript to display. Very minimal JavaScript is required, but if you know JavaScript it can be extremely helpful when you get into the novice and advanced tutorials. Developer.com has a great article called "Ajax from Scratch" that shows different methods than the above ones for making Ajax requests. They also use a lot of more advanced JavaScript to do this, so if you are a fan of (or at least understand) JavaScript, check it out.

function Mutex( cmdObject, methodName ) {
    // define static variable and method (Map here is the article's own helper collection)
    if (!Mutex.Wait) Mutex.Wait = new Map();
    Mutex.SLICE = function( cmdID, startID ) {
        Mutex.Wait.get(cmdID).attempt( Mutex.Wait.get(startID) );
    };
    // define instance method
    this.attempt = function( start ) {
        for (var j = start; j; j = Mutex.Wait.next(j.c.id)) {
            // yield to any waiting command that is entering or holds an earlier ticket
            if (j.enter || (j.number && j.number < this.number)) return;
        }
    };
    // instance state: a bakery-style ticket stamped with the current time
    this.c = cmdObject; this.methodID = methodName;
    this.enter = true; this.number = (new Date()).getTime(); this.enter = false;
}


Now the question is, what are you going to use for the back end of your Ajax requests? The two most popular methods are ASP.NET and PHP. Personally I prefer PHP whether I'm using Ajax or not, but that doesn't mean ASP.NET doesn't have its advantages. Ajax Projects has a nice "hello world" example using ASP.NET with your new Ajax requests. Before moving on to the PHP back end of things, you should also check out "Ajax and PHP without XmlHttpRequest object" from PHPit; it outlines a nice alternative to using the traditional Ajax method.

// work out the base URL of the current page so relative paths still resolve
var url = document.location.href;
var xend = url.lastIndexOf("/") + 1;
var base_url = url.substring(0, xend);

// "Ajax without XMLHttpRequest": load a server-generated script by appending
// a <script> tag to the page; the returned JavaScript runs when it arrives
function ajax_do (url) {
    if (url.substring(0, 4) != 'http') {
        url = base_url + url;
    }
    var jsel = document.createElement('SCRIPT');
    jsel.type = 'text/javascript';
    jsel.src = url;
    document.body.appendChild (jsel);
}


Now on to some "good stuff". Let's put what you just learned to work on something useful. First we'll look at creating a comment system for your website. Comments are a great way for users to interact with a website, and oftentimes it's easier to let them add a comment without leaving the page they're on. To do this we'll take a look at the Ajax Feedback Mechanism from iBegin. Although it's pre-written code, you can still use and learn from it. There are a lot of advanced effects in this library (fading in and out when adding / removing comments), but you will understand the basic concept of taking data from a form, sending it to a back-end PHP script, and showing the user the new information (e.g. "Successful log in" or "Failed log in") without leaving the web page. Web Pasties has a similar walk-through (except no download is required) that goes through form creation and how to store, check, and display data from the current page. Both of these articles are great resources to get you going on your first Ajax project.

Ajax isn't limited to just input boxes, either. For example, phpRiot has a walk-through that shows you how to create real-time sortable lists. You can click and drag items into a new order (much like you can in Netvibes or Google IG). This is very nifty for sites with lots of content in one area; this way, users can decide what they want to see first. The Tool Man takes the last example a few steps further by adding the ability to add multiple lists to one sorting method, multi-row and column support, and more.

That wraps it up for part one of this three-part series. From this article you should know how to create an XMLHttpRequest object, how to grab data from inside text boxes (and other input areas), and how to display that information for the user in real-time without leaving your web page. In part two we will pick up where we left off here, and go into more advanced options, so be sure you know all the information here before reading on!

Mistakes made when developing with Ajax

Using Ajax for the sake of Ajax.

Sure, Ajax is cool, and developers love to play with cool technology, but Ajax is a tool, not a toy. A lot of Ajax use isn't seriously needed to improve usability; it is rather an experiment in what Ajax can do, or an attempt to fit Ajax somewhere it isn't needed.

Breaking the back button

The back button is a great feature of the standard web site user interface. Unfortunately, the back button doesn’t mesh very well with Javascript. Keeping back button functionality is one reason not to go with a pure Javascript web app.

Keep in mind however that good web design provides the user with everything they would need to successfully navigate your site, and never relies on web browser controls.

Not giving immediate visual cues for clicking widgets

If something I'm clicking on triggers Ajax actions, you have to give me a visual cue that something is going on. An example of this is Gmail's loading indicator in the top right. Whenever I do something in Gmail, a little red box in the top right indicates that the page is loading, making up for the fact that Ajax doesn't trigger the browser's normal new-page-loading UI.
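A minimal version of that cue, assuming the page has a hidden <div id="loading">Loading...</div> and a <div id="content"> to fill (both hypothetical), could look like this:

// show the indicator while the request is in flight, hide it when it returns
function loadWithCue(url) {
  var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject("Microsoft.XMLHTTP");
  document.getElementById('loading').style.display = 'block';
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
      document.getElementById('loading').style.display = 'none';
      document.getElementById('content').innerHTML = xhr.responseText;
    }
  };
  xhr.open('GET', url, true);
  xhr.send(null);
}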

Leaving offline people behind

As web applications push the boundaries further and further, it becomes more and more compelling to move all applications to the web. The provisioning is better, the world-wide access model is great, the maintenance and configuration is really cool, the user interface learning curve is short.

However, with this new breed of Ajax applications, people who have spotty internet connections, or people who just don't want to switch to the web, need to be accommodated as well. Just because technology 'advances' doesn't mean that people are ready and willing to go with it. Web application design should at least consider offline access. With GMail it's POP; Backpackit has SMS integration; in the enterprise, it's web services.

Don't make me wait

With Firefox tabs, I can manage various waits at websites, and typically I only have to wait for a page navigation. With AJAX apps combined with poor network connectivity/bandwidth/latency I can have a really terrible time managing an interface, because every time I do something I have to wait for the server to return a response. However, remember that the ‘A’ in AJAX stands for ‘Asynchronous’, and the interaction can be designed so that the user is not prevented from continuing to work on the page while the earlier request is processed.

Sending sensitive information in the clear

The security of AJAX applications is subject to the same rules as any web application, except that once you can talk asynchronously to the server, you may tend to write code that is very chatty in a potentially insecure way. All traffic must be vetted to make sure security is not compromised.

Assuming AJAX development is single platform development.

Ajax development is multi-platform development. Ajax code will run on IE’s javascript engine, Spidermonkey (Mozilla’s js engine), Rhino (a Java js implementation, also from Mozilla), or other minor engines that may grow into major engines. So it’s not enough just to code to JavaScript standards, there needs to be real-world thorough testing as well. A major obstacle in any serious Javascript development is IE’s buggy JS implementation, although there are tools to help with IE JS development.

Forgetting that multiple people might be using the same application at the same time

In the case of developing an Intranet type web application, you have to remember that you might have more than one person using the application at once. If the data that is being displayed is dynamically stored in a database, make sure it doesn’t go “stale” on you.

Too much code makes the browser slow

Ajax introduces a way to make much more interesting javascript applications, unfortunately interesting often means more code running. More code running means more work for the browser, which means that for some javascript intensive websites, especially inefficiently coded ones, you need to have a powerful CPU to keep the functionality zippy. The CPU problem has actually been a limit on javascript functionality in the past, and just because computers have gotten faster doesn’t mean the problem has disappeared.

Not having a plan for those who do not enable or have JavaScript.

According to the W3Schools browser usage statistics, which if anything are skewed towards advanced browsers, 11% of all visitors don't have JavaScript. So if your web application is wholly dependent on JavaScript, it would seem that you have potentially cut out a tenth of your audience.

Blinking and changing parts of the page unexpectedly

The first A in Ajax stands for asynchronous. The problem with asynchronous messages is that they can be quite confusing when they pop in unexpectedly. Asynchronous page changes should only ever occur in narrowly defined places and should be used judiciously; flashing and blinking in messages in areas I don't want to concentrate on harkens back to the days of the HTML blink tag. "Yellow Fade", "One Second Spotlight", and other similar techniques are used to indicate page changes unobtrusively.
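A rough sketch of a "yellow fade" (the timing and colors are arbitrary choices): flash the changed element's background yellow and fade it back to white over about a second.

function yellowFade(id) {
  var el = document.getElementById(id);
  var step = 0, steps = 20;
  var timer = setInterval(function () {
    // raise the blue channel from 0 (yellow) to 255 (white)
    var blue = Math.round(255 * step / steps);
    el.style.backgroundColor = 'rgb(255, 255, ' + blue + ')';
    if (++step > steps) {
      clearInterval(timer);
      el.style.backgroundColor = '';   // hand control back to the stylesheet
    }
  }, 50);
}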

Not using links I can pass to friends or bookmark

Another great feature of websites is that I can pass URLs to other people and they can see the same thing that I'm seeing. I can also bookmark an index into my site navigation and come back to it later. JavaScript, and thus Ajax applications, can cause huge problems for this model of use. Since the JavaScript is dynamically generating the page instead of the server, the URL is cut out of the loop and can no longer be used as an index into navigation. This is a very unfortunate feature to lose; many Ajax web apps thoughtfully include specially constructed permalinks for this exact reason.
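One simple way to get that back is to mirror the application state into the URL fragment, which the browser does let you change without a reload. The sketch below (the element id and the 'page' parameter are invented) restores the view when the page is opened from a bookmark:

// keep the current "page" in the URL fragment so it can be bookmarked or shared
function showPage(n) {
  // stand-in for the real Ajax call that renders that page of content
  document.getElementById('content').innerHTML = 'Showing page ' + n;
  window.location.hash = 'page=' + n;   // updates the URL without a reload
}

// when the page is loaded from a bookmark, rebuild the view from the fragment
window.onload = function () {
  var m = window.location.hash.match(/page=(\d+)/);
  if (m) showPage(parseInt(m[1], 10));
};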

Blocking Spidering

Ajax applications that load large amounts of text without a reload can cause a big problem for search engines. This goes back to the URL problem. If users can come in through search engines, the text of the application needs to be somewhat static so that the spiders can read it.

Asynchronously performing batch operations

Sure, with Ajax you can make edits to a lot of form fields happen immediately, but that can cause a lot of problems. For example, if I check off a lot of check boxes that are each sent asynchronously to the server, I lose my ability to keep track of the overall state of checkbox changes, and the flood of checkbox change indications will be annoying and disconcerting.

Scrolling the page and making me lose my place

Another problem with popping text into a running page is that it can affect the page scroll. I may be happily reading an article or paging through a long list, and an asynchronous JavaScript request will decide to cut out a paragraph way above where I'm reading, cutting off my reading flow. This is obviously annoying, and it wastes my time trying to figure out my place. But then again, that would be a very stupid way to program a page, with or without AJAX.

Inventing new UI conventions

A major mistake that is easy to make with Ajax is: ‘click on this non-obvious thing to drive this other non-obvious result’. Sure, users who work with an application for a while may learn that if you click and hold the mouse on this div you can then drag it and permanently move it somewhere else, but since that’s not part of the common user experience, you increase the time and difficulty of learning the application, which is a major negative for any application. On the plus side, intuitiveness is a function of learning, and AJAX is popularising many new conventions which will become intuitive as time goes by. The net result will be greater productivity once the industry gets over the intuitiveness hump.

Character Sets

One common problem with AJAX is forgetting about character sets. You should always set the content character set on the server side, and encode any data sent by JavaScript. Use ISO-8859-1 if you only use plain English, or UTF-8 if you use special characters such as æ, ø and å (Danish special characters). Note: nowadays it is usually a good idea to go with UTF-8, as it supports many languages.
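
On the JavaScript side, that mostly means declaring the charset on the request and URL-encoding the data. A rough sketch, assuming a UTF-8 page, a made-up /addComment URL, and the cross-browser createXmlHttp helper shown later in this article:

function postComment(text) {
    var xmlHttp = createXmlHttp();              // cross-browser XMLHTTP helper, defined later
    xmlHttp.open("POST", "/addComment", true);  // made-up URL
    // State the charset explicitly so the server does not have to guess.
    xmlHttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");
    // encodeURIComponent always produces UTF-8 percent-escapes, so æ, ø and å survive the trip.
    xmlHttp.send("comment=" + encodeURIComponent(text));
}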

Changing state with links (GET requests)

The majority of Ajax applications tend to just use the GET method when working with AJAX. However, the W3C standards state that GET should only be used for retrieving data, and POST should be used for setting data. Although there might be no noticeable difference to the end user, these standards should still be followed to avoid problems with robots or programs such as Google Web Accelerator.
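
Switching a state-changing call from GET to POST is only a couple of lines' difference. A rough sketch (the /deleteItem URL is made up, and createXmlHttp is again the helper shown later in this article):

// Retrieving data: GET is fine, and the URL stays cacheable.
//   xmlHttp.open("GET", "/items?id=42", true);

// Changing data: use POST so prefetchers and robots following links can't trigger it.
var xmlHttp = createXmlHttp();
xmlHttp.open("POST", "/deleteItem", true);   // made-up URL
xmlHttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xmlHttp.send("id=42");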

Not cascading local changes to other parts of the page

Since Ajax/JavaScript gives you such specific control over page content, it’s easy to get too focused on a single area of content and miss the overall integrated picture. An example of this is the Backpackit page title: if you change it, the heading is immediately replaced, and they even remember to replace the title on the right, but they don’t update the head title tag with the new page title. With Ajax you have to think about the whole picture even when making localized changes.
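
The simplest fix is to route every title change through one function that updates every place the title appears, including document.title. A minimal sketch (the element ids are invented):

function renamePage(newTitle) {
    document.getElementById("pageHeading").innerHTML = newTitle;   // the big heading on the page
    document.getElementById("sidebarTitle").innerHTML = newTitle;  // the copy shown on the right
    document.title = newTitle;                                     // the head title tag / window caption
}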

Problem reporting

In a traditional server-side application, you have visibility into every exception, you can log all interesting events and benchmarks, and you can even record and view (if you wish) the actual HTML that the browser is rendering. With client-side applications, you may have no idea that something has gone wrong unless you deliberately catch exceptions from the remotely called pages and log them back to your database.
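
One cheap way to get some of that visibility back is to trap client-side errors and ship them to a logging URL on the server. A rough sketch (the /logError URL is made up):

// Report uncaught JavaScript errors back to the server.
window.onerror = function (message, file, line) {
    var details = "msg=" + encodeURIComponent(message) +
                  "&file=" + encodeURIComponent(file) +
                  "&line=" + line;
    // An image beacon avoids depending on XMLHTTP being in a sane state.
    new Image().src = "/logError?" + details;
    return false;   // let the browser report the error as usual too
};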

Return on Investment

Sometimes AJAX can impressively improve the usability of an application (a great example is the star-rating feedback on Netflix), but more often you see examples of expensive rich-client applications that were no better than the plain HTML versions.

Mimicking browser page navigation behavior imperfectly

One example of this is BlinkList’s Ajax paging mechanism on the front page. As you click to see another page of links, Ajax fills in the next page. Except that if you are used to the usual browser experience, you probably expect to land at the top of the page when you hit “next page”, something JavaScript-driven page navigation doesn’t do. BlinkList actually anticipates this and tries to counteract it by manipulating your scrolling, scrolling upwards until you hit the top. Except this can be slow, and if you try scrolling down you will fight the upwards-scrolling JavaScript, which won’t let you scroll down. But then again, that is a very stupid way to program a page, with or without AJAX.

Another Tool

It seems everyone has forgotten that Ajax is just another tool in the web design toolbox. You can use it or not, and misuse it or not. The old 80/20 rule always applies to applications (if you cover 80% of what all users want/need, you have a viable app), so if you lose 11% of your audience because they don’t switch on their JavaScript, you have to ask yourself whether changing your app is worth capturing that 11%, or whether you should stick with the 89% who are currently using it and move on to something else. Web apps should also take advantage of every trick that helps them function quickly and efficiently. If that means using JavaScript for one part, Ajax for another, and ASP callbacks for a third, so be it.

IE7 and AJAX: the native XMLHTTPRequest object

I’m excited to mention that IE7 will support a scriptable native version of XMLHTTP. This can be instantiated using the same syntax across different browsers and decouples AJAX functionality from an ActiveX enabled environment.

What is XMLHTTP?

XMLHTTP was first introduced to the world as an ActiveX control in Internet Explorer 5.0. Over time, this object has been implemented by other browsing platforms, and is the cornerstone of “AJAX” web applications. The object allows web pages to send and receive XML (or other data) via the HTTP protocol. XMLHTTP makes it possible to create responsive web applications that do not require redownloading the entire page to display new data. Popular examples of AJAX applications include the Beta version of Windows Live Local, Microsoft Outlook Web Access, and Google’s GMail.

Charting the changes: XMLHTTP in IE7 vs. IE6

In IE6 and below, XMLHTTP is implemented as an ActiveX object provided by MSXML.

In IE7, XMLHTTP is now also exposed as a native script object. Users and organizations that choose to disable ActiveX controls can still use XMLHTTP based web applications. (Note that an organization may use Group Policy or IE Options to disable the new native XMLHTTP object if desired.) As part of our continuing security improvements we now allow clients to configure and customize a security policy of their choice and simultaneously retain functionality across key AJAX scenarios.

IE7’s implementation of the XMLHTTP object is consistent with that of other browsers, simplifying the task of cross-browser compatibility. Using just a bit of script, it’s easy to build a function which works with any browser that supports XMLHTTP:

function createXmlHttp() {
    if (window.XMLHttpRequest) {
        // If IE7, Mozilla, Safari, etc.: use the native object
        return new XMLHttpRequest();
    }
    else if (window.ActiveXObject) {
        // ...otherwise, use the ActiveX control for IE5.x and IE6
        return new ActiveXObject("Microsoft.XMLHTTP");
    }
    return null;   // no XMLHTTP support at all
}

var xmlHttp = createXmlHttp();

Note that IE7 will still support the legacy ActiveX implementation of XMLHTTP alongside the new native object, so pages currently using the ActiveX control will not require rewrites.

I look forward to hearing any feedback and suggestions.

How to reduce page loading time

It is widely accepted that fast-loading pages improve the user experience. In recent years, many sites have started using AJAX techniques to reduce latency. Rather than round-trip through the server retrieving a completely new page with every click, often the browser can either alter the layout of the page instantly or fetch a small amount of HTML, XML, or javascript from the server and alter the existing page. In either case, this significantly decreases the amount of time between a user click and the browser finishing rendering the new content.

However, for many sites that reference dozens of external objects, the majority of the page load time is spent in separate HTTP requests for images, javascript, and stylesheets. AJAX probably could help, but speeding up or eliminating these separate HTTP requests might help more, yet there isn't a common body of knowledge about how to do so.

While working on optimizing page load times for a high-profile AJAX application, I had a chance to investigate how much I could reduce latency due to external objects. Specifically, I looked into how the HTTP client implementation in common browsers and characteristics of common Internet connections affect page load time for pages with many small objects.

Try some of the following:
  • Turn on HTTP keepalives for external objects. Otherwise you add an extra round-trip for every HTTP request. If you are worried about hitting global server connection limits, set the keepalive timeout to something short, like 5-10 seconds. Also look into serving your static content from a different webserver than your dynamic content. Having thousands of connections open to a stripped-down static file webserver can be done in about 10 megs of RAM total, whereas your main webserver might easily eat 10 megs of RAM per connection.

  • Load fewer external objects. Due to lower request overhead, one bigger file just loads faster than two smaller ones half its size. Figure out how to globally reference the same one or two javascript files and one or two external stylesheets instead of many; if you have more, try preprocessing them when you publish them. If your UI uses dozens of tiny GIFs all over the place, consider switching to a much cleaner CSS-based design which probably won't need so many images. Or load all of your common UI images in one request using a technique called "CSS sprites".

  • If your users regularly load a dozen or more uncached or uncacheable objects per page, consider evenly spreading those objects over four hostnames. This usually means your users can have 4x as many outstanding connections to you. Without HTTP pipelining, this results in their average request latency dropping to about 1/4 of what it was before.

    When you generate a page, evenly spreading your images over four hostnames is most easily done with a hash function, like MD5. Rather than having all of your image tags load objects from http://static.example.com/, create four hostnames (e.g. static0.example.com, static1.example.com, static2.example.com, static3.example.com) and use two bits from an MD5 of the image path to choose which of the four hosts you reference in the tag. Make sure all pages consistently reference the same hostname for the same image URL, or you'll end up defeating caching. (A small sketch of this hash-and-pick step appears after this list.)

    Beware that each additional hostname adds the overhead of an extra DNS lookup and an extra TCP three-way handshake. If your users have pipelining enabled or a given page loads fewer than around a dozen objects, they will see no benefit from the increased concurrency and the site may actually load more slowly. The benefits only become apparent on pages with larger numbers of objects. Be sure to measure the difference seen by your users if you implement this.

  • Possibly the best thing you can do to speed up pages for repeat visitors is to allow static images, stylesheets, and javascript to be unconditionally cached by the browser. This won't help the first page load for a new user, but can substantially speed up subsequent ones.

    Set an Expires header on everything you can, with a date days or even months into the future. This tells the browser it is okay to not revalidate on every request, which can add latency of at least one round-trip per object per page load for no reason.

    Instead of relying on the browser to revalidate its cache, if you change an object, change its URL. One simple way to do this for static objects, if you have staged pushes, is to have the push process create a new directory named by the build number, and teach your site to always reference objects out of the current build's base URL. (For example, instead of /images/logo.gif you might reference /build/1234/images/logo.gif; when you do another build next week, all references change to /build/1235/images/logo.gif.) This also nicely solves problems with browsers sometimes caching things longer than they should -- since the URL changed, they think it is a completely different object.

    If you conditionally gzip HTML, javascript, or CSS, you probably want to add a "Cache-Control: private" if you set an Expires header. This will prevent problems with caching by proxies that won't understand that your gzipped content can't be served to everyone. (The Vary header was designed to do this more elegantly, but you can't use it because of IE brokenness.)

    For anything where you always serve the exact same content when given the same URL (e.g. static images), add "Cache-Control: public" to give proxies explicit permission to cache the result and serve it to different users. If a local cache has the content, it is likely to have much less latency than you; why not let it serve your static objects if it can?

    Avoid the use of query params in image URLs, etc. At least the Squid cache refuses to cache any URL containing a question mark by default. I've heard rumors that other things won't cache those URLs at all, but I don't have more information.

  • On pages where your users are often sent the exact same content over and over, such as your home page or RSS feeds, implementing conditional GETs can substantially improve response time and save server load and bandwidth in cases where the page hasn't changed.

    When serving static files (including HTML) off of disk, most webservers will generate Last-Modified and/or ETag reply headers for you and make use of the corresponding If-Modified-Since and/or If-None-Match mechanisms on requests. But as soon as you add server-side includes, dynamic templating, or have code generating your content as it is served, you are usually on your own to implement these.

    The idea is pretty simple: When you generate a page, you give the browser a little extra information about exactly what was on the page you sent. When the browser asks for the same page again, it gives you this information back. If it matches what you were going to send, you know that the browser already has a copy and send a much smaller 304 (Not Modified) reply instead of the contents of the page again. And if you are clever about what information you include in an ETag, you can usually skip the most expensive database queries that would've gone into generating the page.

  • Minimize HTTP request size. Often cookies are set domain-wide, which means they are also unnecessarily sent by the browser with every image request from within that domain. What might've been a 400 byte request for an image could easily turn into 1000 bytes or more once you add the cookie headers. If you have a lot of uncached or uncacheable objects per page and big, domain-wide cookies, consider using a separate domain to host static content, and be sure to never set any cookies in it.

  • Minimize HTTP response size by enabling gzip compression for HTML and XML for browsers that support it. For example, the 17k document you are reading takes 90ms of the full downstream bandwidth of a user on 1.5Mbit DSL. Or it will take 37ms when compressed to 6.8k. That's 53ms off of the full page load time for a simple change. If your HTML is bigger and more redundant, you'll see an even greater improvement.

    If you are brave, you could also try to figure out which set of browsers will handle compressed Javascript properly. (Hint: IE4 through IE6 ask for their javascript compressed, then break badly if you send it that way.) Or look into Javascript obfuscators that strip out whitespace, comments, etc. and usually get it down to 1/3 to 1/2 its original size.

  • Consider locating your small objects (or a mirror or cache of them) closer to your users in terms of network latency. For larger sites with a global reach, either use a commercial Content Delivery Network, or add a colo within 50ms of 80% of your users and use one of the many available methods for routing user requests to your colo nearest them.

  • Regularly use your site from a realistic net connection. Convincing the web developers on my project to use a "slow proxy" that simulates bad DSL in New Zealand (768Kbit down, 128Kbit up, 250ms RTT, 1% packet loss) rather than the gig ethernet a few milliseconds from the servers in the U.S. was a huge win. We found and fixed a number of usability and functional problems very quickly.

    To implement the slow proxy, I used the netem and HTB kernel modules available in the Linux 2.6 kernel, both of which are set up with the tc command line tool. These offer the most accurate simulation I could find, but are definitely not for the faint of heart. I've not used them, but supposedly Tamper Data for Firefox, Fiddler for Windows, and Charles for OSX can all rate-limit and are probably easier to set up, but they may not simulate latency properly.

  • Use Google's Load Time Analyzer extension for Firefox from a realistic net connection to see a graphical timeline of what it is doing during a page load. This shows where Firefox has to wait for one HTTP request to complete before starting the next one, and how page load time increases with each object loaded. The Tamper Data extension can offer similar data in a less easy-to-interpret form. And the Safari team offers a tip on a hidden feature in their browser that provides some timing data too.

    Or if you are familiar with the HTTP protocol and TCP/IP at the packet level, you can watch what is going on using tcpdump, ngrep, or ethereal. These tools are indispensable for all sorts of network debugging.

  • Try benchmarking common pages on your site from a local network with ab, which comes with the Apache webserver. If your server is taking longer than 5 or 10 milliseconds to generate a page, you should make sure you have a good understanding of where it is spending its time.

    If your latencies are high and your webserver process (or CGI if you are using that) is eating a lot of CPU during this test, it is often a result of using a scripting language that needs to recompile your scripts with every request. Software like eAccelerator for PHP, mod_perl for perl, mod_python for python, etc can cache your scripts in a compiled state, dramatically speeding up your site. Beyond that, look at finding a profiler for your language that can tell you where you are spending your CPU. If you fix that, your pages will load faster and you'll be able to handle more traffic with fewer machines.

    If your site relies on doing a lot of database work or some other time-consuming task to generate the page, consider adding server-side caching of the slow operation. Most people start with writing a cache to local memory or local disk, but that starts to fall down if you expand to more than a few web server machines. Look into using memcached, which essentially creates an extremely fast shared cache that's the combined size of the spare RAM you give it off of all of your machines. It has clients available in most common languages.

  • (Optional) Petition browser vendors to turn on HTTP pipelining by default on new browsers. Doing so will remove some of the need for these tricks and make much of the web feel much faster for the average user. (Firefox has this disabled supposedly because some proxies, some load balancers, and some versions of IIS choke on pipelined requests. But Opera has found sufficient workarounds to enable pipelining by default. Why can't other browsers do similarly?)
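
As promised above, here is a rough sketch of the hash-and-pick step for spreading images over four hostnames. Real sites typically do this on the server with MD5 as the page is generated; the trivial string hash and example hostnames below are only for illustration:

var STATIC_HOSTS = ["static0.example.com", "static1.example.com",
                    "static2.example.com", "static3.example.com"];

// Pick a host deterministically from the image path so that the same
// URL always maps to the same hostname and stays cacheable.
function hostForImage(path) {
    var hash = 0;
    for (var i = 0; i < path.length; i++) {
        hash = (hash * 31 + path.charCodeAt(i)) % 4096;
    }
    return STATIC_HOSTS[hash % 4];   // two bits' worth of choice
}

// e.g. "http://" + hostForImage("/images/logo.gif") + "/images/logo.gif"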