Valid HTML and Valid CSS

This blog (and this site in general) has two nice image buttons at the end of the page, saying that the HTML and CSS are valid. Are they really?

Well, I found out several days ago that they weren't. I must've been really bored, because I actually clicked the images: both the HTML and the CSS were invalid. Today I had some time to fix this.

The HTML is now valid again. The tiny problem was caused by BlogEngine itself: an ampersand wasn't encoded. I'm now using a modified dll, and I've submitted my code changes (all 4 keystrokes of them) to their source code repository, where they're waiting to be approved.
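
For the curious, valid HTML requires a raw ampersand in markup to be written as the &amp; entity. The snippet below is only an illustration of that kind of fix, not BlogEngine's actual code:

<!-- invalid: raw ampersand in the query string -->
<a href="post.aspx?year=2009&month=5">May 2009</a>

<!-- valid: ampersand written as an entity -->
<a href="post.aspx?year=2009&amp;month=5">May 2009</a>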

As for the CSS, I think the W3C validator has a bug: it chokes on the border-radius property, which is valid CSS3. I see that someone else already filed this as a bug last week, so I guess I'll just have to wait.

In any case, I think it's interesting to explore how this validation can become part of the build process; otherwise you just have two images there that don't mean anything. I didn't find anything ready-made on the web, so I created it myself: NAnt tasks that use the W3C online validation services to validate the URLs of a site.

The project is called w3c-nant and it consists of a single dll with two NAnt tasks: validateHtml and validateCss. The names are self-descriptive, I think. To use them, first copy the dll into NAnt's bin folder. In your NAnt build file you can then write instructions like:

 

<validateHtml url="http://www.mysite.com/" />
<validateCss url="http://www.mysite.com/" />
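
If you'd rather not copy the dll into NAnt's bin folder, NAnt's built-in <loadtasks> task should work too. The assembly path and file name below are placeholders; point it at wherever you keep the dll from the w3c-nant download:

<!-- placeholder path and file name: adjust to where you keep the w3c-nant dll -->
<loadtasks assembly="lib/w3c-nant.dll" />
<validateHtml url="http://www.mysite.com/" />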

 

Note that, by default, the build will fail if the W3C validator reports validation errors. If you don't want that to happen, you can set the failonerror attribute to false:

 

<validateHtml url="http://www.mysite.com" failonerror="false" />

 

NAnt will then treat this as a non-fatal error: it will be logged as an error, but the build will still succeed.

Also, these tasks only validate the URL they are given; they don't perform any crawling. So they don't guarantee that 100% of the site's pages are valid. My suggestion is to pick a few URLs that you think are representative enough, or write a crawler task and integrate it with the validator 🙂
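
For example, here is a minimal sketch that uses NAnt's built-in <foreach> task to validate a handful of representative pages; the site address and page list are just placeholders:

<!-- placeholders: replace with your own site and representative pages -->
<foreach item="String" in="/,/blog/,/contact.aspx" delim="," property="page">
  <validateHtml url="http://www.mysite.com${page}" />
</foreach>
<validateCss url="http://www.mysite.com/" />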

Hope this helps.
