Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Comments for The Tyranny of Validation


you write:
You see, the validator is just an implementation of a client that has been explicitly tuned to be as bitchy as possible.

No, the W3C HTML validator is an implementation of a parser that is analyzing the markup to see if it conforms to the basic requirements of HTML and/or XHTML syntax. It's not pretending to be a client, bitchy or not, it's a software tool that can be used to see whether your markup meets the basic requirements of being called HTML or XHTML (or CSS, in the case of the CSS Validator).

While I agree that it is important to allow for experimentation and innovation in the tools we use, and that doing so may run afoul of validators (because you're no longer writing HTML or XHTML; you're writing deliberately invalid markup for whatever purpose), it should be kept in mind that by doing so, you're striking out on your own. You're throwing away an important tool, and while you may think yourself smart enough to read the output and ignore the few errors caused by that experimentation, your advocacy of abandoning validation is heard by many as an attack on the idea of syntactical correctness, period.

Given that we've fought a long, hard fight to even get browsers that do reasonably similar things when fed valid markup, it's difficult for us to sit by and watch as you choose to deliberately feed those same browsers invalid markup.

Don't like the standard? Change it, or lobby to have it changed, or write your own DTD so that validation as a principle is still valid.

You can argue, correctly or not (depending on the tool you're feeding the possibly invalid markup to), that browsers understand and handle your innovative but invalid markup as expected, but by introducing these innovations you're making it less likely that new browsers, written to the XHTML/XML/HTML specifications to some degree or another, will be able to handle it properly.

By acting on your own outside of the standards process, you're muddying waters we've worked long and hard to try to clear.

I'm no friend of arbitrary academics; the page you point to in your last link makes some good points about how problems remain in the MIME type specifications for XHTML, etc., but it doesn't make any arguments against validation as such; it only debunks some of the more outlandish claims used by some to justify their move to XHTML.

You say you've been around for a long time, and you make pleasant noises about how you learned from the awful tag soup days; you also say that you were a programmer first. I'd like to suggest that you consider what will happen when Dojo is seen to be incompatible with valid XHTML. As an open source project, under the AFL, you've essentially made it possible for anyone to come along and fork the code (say, someone who wants Dojo but not invalid markup). Now, let's say that someone "fixes" Dojo so that it doesn't require invalid markup, but someone else decides that they don't like their approach or yours; then you've got two derivative works. Say some people start using all three Dojo toolkits, and all the support requests come back to the original project. By setting out on the road to requiring invalid markup, in the name of innovation or as a bow to the "reality of the browsers of today" you've effectively recreated exactly the sort of conditions that led to the awful 4.x browser wars, tag soup, and incompatible and conflicting implementations.

Wouldn't it make more sense to accept the conditions of working with a tool that grew out of the Web, the Web of HTML and XML and XHTML, and fix your toolset to recognize those conditions?

You repeat Postel's Law of being "liberal in what you accept", and give examples of how a forgiving network environment allows things to work (though they're pretty weak examples), but, unsurprisingly, given your rhetorical aims in this essay, you do not also hold to the second half of the Law you quote: namely, that you should also be conservative in what you produce. Don't ever forget that it is the second half of that maxim that allows this network to operate as well as it does.

I encourage you to rethink your position on validation.

Hi Steve,

A couple of quick notes (more later when I get back from a prior engagement).

Firstly, Dojo doesn't require invalid markup. In fact, our parser works very, very, very hard to normalize whatever you throw at it (namespaces, no namespaces, external documents, whatever) in the service of doing what the developer intended in the first place: building an interface that helps the user accomplish whatever task they need to get done.

Secondly, I'm not against syntactical correctness. It's both inaccurate and unfair of you to cast my position on this as anything more than what I clearly laid it out as: a realization that validation (not well-formedness) isn't valuable in the real world. Where we live.

Lastly (for now), the fact that I called the W3C validator a client and not a parser was perhaps overly charitable of me. Were it based on one (or more) real clients, it would be able to tell us actually useful things, such as when behaviors differ in the rendering process, in addition to its current savant-like musings.

Will I still use it (and other things) to lint my code? Of course. But I'm also going to continue to ask questions like "and who does that help?". Better that webdevs be forced to think about their chosen profession and what actually benefits users than that they slavishly follow the rules of an ambiguous DTD.

Regards

by alex at
Apologies for any misunderstanding about what Dojo requires or doesn't require; I was going on hearsay. I'll take a better look soon.

I'll just take issue with one thing: you (perhaps rightly) split out the importance of well-formedness from the import of validation. I use custom XML in my projects here, and though I quickly wrote an XML schema for the latest file format I have been using, I understand the usefulness and flexibility of simply using well-formed code during development, and leaving validation for last.

But your audience when you publish a Web page escapes the well controlled environment of development exercises, making validation once again important; bear in mind too that when you're doing custom development, your code is the only code that has to see the inputs, and you can be as liberal or conservative in what you accept as you like.

When you move into the Web, your inputs are suddenly exposed to a wide variety of programs; the lowest common denominator you can expect of them is to deal properly with valid markup. If they don't deal with invalid markup as you hope they will, you have no recourse; that was the underlying basis for the WaSP's campaigns of the past, and while we support innovation, we do so only if the software is also supportive of baseline standards, so that other software can also deal properly with the lingua franca. This presumes valid markup, in my opinion, though some folks hold different opinions.

Anyway - I look forward to your reply, and trust that we can come to better understand each other and our respective positions on the matter.

I believe what Alex is really trying to say here is that the obsession with validation is such that if your site doesn't pass the W3C validator because it uses custom attributes from some other namespace or DTD, it is considered invalid.

This has led to a somewhat ridiculous (in my opinion at least) overloading of CSS class names and rel attributes in ways that they were never intended. And while this behavior is valid, I would argue that it is not better.

So why not create a custom DTD or namespace? Well, for most people that is a lot of effort, or beyond their capabilities (certainly not beyond mine or Alex's, but still, an extra layer of burden). Also, in the early days of building a toolkit, the DTD would change almost daily for a while as new features get added, making it a maintenance nightmare until things stabilize. So by requiring a formal DTD or namespace while you are still in early development of something new, you place a lot of extra burden on the developer.
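
To make the trade-off concrete, here's a rough sketch (the attribute names are illustrative, not exact Dojo syntax):

    <!-- well-formed, and browsers parse it happily, but the custom
         attributes fail validation against the XHTML DTD -->
    <div dojoType="ComboBox" dataUrl="/states.json"></div>

    <!-- passes the validator, but overloads class with data it was
         never meant to carry -->
    <div class="combobox dataurl-states"></div>

Browsers build a DOM from both without complaint; only the validator objects to the first.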

Hey Steve,

I'll try to keep this short and steer clear of points I've already made (but I'm not really known for doing well at either of those).

You said:

Don’t like the standard? Change it, or lobby to have it changed, or write your own DTD so that validation as a principle is still valid.

Why do you consider it to be an axiom that validation is useful? You still haven't justified that. You keep arguing that if I just do the valid thing...then...well...I'll validate!

The goal isn't to be the ISO 9001 of web development. It's to build useful things.

You also said:

Given that we’ve fought a long, hard fight to even get browsers that do reasonably similar things when fed valid markup, it’s difficult for us to sit by and watch as you choose to deliberately feed those same browsers invalid markup.

Give me a break. The whole point of this post was to point out that it is in fact NOT hurting you. Or anyone else. If it's hard for you to watch, it's not because anyone serving invalid markup has done anything to harm you. OTOH, people serving inaccessible, unmaintainable, slow to render, unusable content have hurt lots of people (including you).

You rightly point out that you've worked very hard to get the clients to do the right thing, which is exactly what I've been saying is valuable all along. But when useful, non-detrimental extension is being actively frowned upon by the W3C and others, it's time to declare that they've jumped the shark. Does that mean that WaSP is flirting with irrelevance? Probably. When we lose the "better for users" plot, everyone loses.

Steve, standards don't spring out of thin air. They happen because implementations strike out to do something useful and succeed. So am I striking out on my own? You bet. The users of Dojo and the other tools I work on win as a result. And no validator is going to get in the way of that.

So I'm going to put this back on you: you seem to think that pressuring developers to validate is a useful thing to do. Why? Who does it help?

I argue that if you really really want to improve the state of webdev for everyone, you'll put the pressure BACK on the vendors and let the market forces that arise pressure developers into doing things that are better for users.

Regards

by alex at
I would like to expand a little on what Alex said here: So I’m going to put this back on you: you seem to think that pressuring developers to validate is a useful thing to do. Why? Who does it help?

Now, before I go any further, let me just reiterate that this is not a "bash Steve Champeon" post; I'm glad to see that he's reading Alex's blog, which is fairly new, and concerned enough about Alex's post to comment at length about it. So kudos to you Steve, and please keep the argument going.

That being said... while I do understand the idea of WaSP getting developers to "validate", I have to admit that, as a web application developer (mind you, web application, not web site developer), I have found validation to be absolutely useless as a tool. I design for a high level of interactivity; I've chosen the UAs that will be the targets of my application, because (in general) my application serves a very specific purpose; I will develop and test in said UAs, and make sure my UI works correctly, regardless of whether or not it passes the HTML validator.

This is what Alex is getting at, I think. Not that validation, as a tool, is useless; far from it. But targeted application development should not require complete validation, especially when aimed at agents that already exist and are not going to change dramatically within the next, say, five years.

(and if you think that's an unreasonable timeframe, I would point you to the lifespan of Netscape Navigator 4.x.)

Now, bearing in mind the good work you (Steve et al.) have done, I think Alex is not proposing that we throw the use of a validator out of the window entirely; just when you are developing very targeted applications aimed at very specific user agents that may or may not support said standards in full.

OK, apparently you can't use certain tags within WP comments (sigh). Bear in mind that what might not make sense in the above comments has to do with tags not being parsed as I probably expected.

Oh well--sorry, Alex!

Was a pleasure to read, Alex. Still monitoring your blog here :)
by trs at
Great post Alex. I'm with you 100%, not surprisingly. I'm also happy that the discourse following this post has stayed professional and thoughtful, as opposed to the discussion (which I eventually closed) on my March To Your Own Standard post. Steve and I like to have food fights every so often, but clearly, this type of conversation is of much more value.

I don't have too much to add at this point because I feel Alex has addressed most criticism most excellently, but I do think this sentence from Steve might illustrate part of why hard-core validatorians have a different stance on this stuff than others:

When you move into the Web, your inputs are suddenly exposed to a wide variety of programs; the lowest common denominator you can expect of them is to deal properly with valid markup.

See, that is what is false about all of this. When a traffic signal turns green, I expect no cars to be coming the other way. That doesn't mean I don't look before walking across the street. In fact, I'd rather cross the street on red while looking both ways than cross it on green without. My point is that on the web -- right now at least -- uniform rendering of valid code is not a reality. Proponents of validation-at-all-costs say it's the best way to debug your pages in a multi-browser world. Maybe it's just the sorts of sites I work on, but rarely are display inconsistencies I notice at all related to the validity of my pages. Instead, they are related to various inconsistencies in the ways different browsers handle the exact same CSS, HTML, XHTML, JS, or whatever else.

I don't think I've ever been stumped by a display problem only to run my page through a validator and find the answer. Instead, it's a search on Google, or Quirksmode, or PositionIsEverything that reveals the answer to the problem.

And just as a final footnote: I am not against valid code. I am for writing valid code... right up until it keeps you from adding useful functionality or getting more important things done.

by Mike D. at
And just as a final footnote: I am not against valid code. I am for writing valid code... right up until it keeps you from adding useful functionality or getting more important things done.

I agree wholeheartedly. I think you might want to take more care to address the context of your complaint, Alex. Casting validation aside in light of its limitations on creative potential is one case, but hacking together an ugly, hardly legible page with nastiness abounding is quite another. The latter case is one which I think most standards (and validation) advocates wish to prevent.

Also, a minor picky point: using Postel's Law from a browser's point of view is certainly one way of looking at it (and it seems to work nicely for you), but there is another point of view--that of the author. When you step back to think about it, you realize that the browser isn't the target of your web page or app, the user is. It is the user that needs the consistent experience that comes from being conservative in what you produce. Although browsers have become proficient at processing tag soup, the aim of validated markup is to isolate your page or app from dependencies on any specific browser behavior--to make it timeless and independent.

That said, good luck with your own standards, sincerely.

While validation often leads people in the wrong direction, it has led so many more in the right direction. If I had never learned of the standards and validation stuff, I would still be using Flash as the main content holder for websites.

I think you are right in saying that it has had good effects, but I don't know if we should be going to the length of making our sites not validate on purpose.

Standards serve us, not the other way around. I'm glad to see I'm not the only one that thinks so.

Lately, I've been running into sites served as application/xhtml+xml with broken markup, and wondering when people were going to start becoming pragmatic about the way they build web sites.

by Josh Hughes at
I agree very much with the gist of what you're saying and how it applies to software, but I think you've made a fatal assumption in your treatment. (X)HTML is not software, it's markup. Unlike software, markup doesn't "do" anything. Rather it delineates discrete chunks of information into meaningful, semantic pieces. That's the sole purpose of (X)HTML from way back in the days of its humble origins, no matter what software vendors have done to monkeywrench it into a presentational device. I daresay it's the deviation from the "markup" (information delineation) paradigm into the "software" (HTML should 'do' something) paradigm that has been at the crux of issues from accessibility to usability from the day that first web developer crawled from the cyber-ooze and placed things inside a table to make a page "look pretty".

Going back to your Google analogy, yes, I'm sure that the software between my user agent and the Google server has to cut a corner here or there, but the information it delivers between us is in a rigid, well-defined format. Otherwise, that kernel on my system has no reference point for reassembling a fragmented packet in a meaningful way. While software can and should be forgiving, the information it conveys cannot and should not be, or that information loses its usefulness. Can you imagine how much fun it'd be to write software for routers if there were several slightly different and sometimes incompatible versions of an IP packet showing up at your cyber-doorstep, all dependent on what kind of software flung that packet in your direction?

Since we're on the subject of validation, let's not forget what I actually consider the most critical piece of the puzzle for web developers, and that's CSS. It also is not software, but a set of rules software should use for interpreting the correct visual/aural rendering of marked-up data. And while I think that valid markup is a very important target for web developers to shoot for, I'm far more concerned with user agents correctly implementing CSS. Let's face it, most of the invalid hacks and markup we employ would be unnecessary if we could just get the user agents (aural, visual, print, etc.) to render according to standards. THERE'S the rub.

So while I agree that software must by necessity be liberal and forgiving in order to be useful, I disagree that the information structure transported by that software should be as lenient. Markup should be valid, and if it is going to be rendered in some manner, those rules should be valid and adhered to by software vendors. After all, what would truly make a software developer's life easier than knowing they can consistently expect the same structured data and set of rules for interpretation over and over?

Bryce brings up a very important point and one that many people don't fully appreciate: it's *CSS* which is probably much more critical to the health of the web right now than XHTML. There are only so many ways you can mess up markup... forget to close a tag, forget a quote, whatever. Most people have a pretty good handle on keeping that stuff from happening. But it is the much more subjective and fuzzy art of CSS rendering which provides more room for headaches.

If you need any evidence of this, just head over to Dave Hyatt's excellent weblog Surfin Safari, where when explaining changes in Safari's rendering engine, he will often say things like:

"This particular CSS property is supposed to work this way according to the W3C but it works this way in PC IE, so what we've done is compromise and make it work this way instead so no one is harmed."

I'm paraphrasing, but you get the picture. How CSS renders on-screen is not as cut-and-dried as people may think, even if all specs are followed.

by Mike D. at
And what in the world happened to my post to make it look like I'm writing comment poetry??? LOL

Bryce,

I'm not sure I understand. HTML has been something that "does" something for a very very long time, and in fact that's a huge part of its value. The kinds of apps I work on want to make HTML "do" even more, not less. Should HTML be able to "do" things when it's just showing you a news page? Dunno. But that's not where the action is these days.

The utility of a thing isn't defined by our ability to accept how it got there; it's in how much better it makes people's lives. And (d)HTML has the potential to make a lot of things better exactly because it does things.

Regards

by alex at
Whatever happened to the days of code first, spec later, if at all? Isn't that part of the point? I can create an imaginary namespace in XML, come up with some hare-brained tags that do something, and once everything is working I can hammer out the schema. Where's the harm?

Oops, I shouldn't have said that on a page that could get caught by Google.
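
For what it's worth, the sort of thing I mean is trivially well-formed (namespace URI and tag names invented on the spot):

    <panel xmlns:x="http://example.com/2005/experiment">
      <x:slider min="0" max="100" value="42"/>
    </panel>

An XML parser is perfectly happy with that today; the schema can catch up once the design settles.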

There is only one validation worth its salt: ROI. The "return" may be measured in warm-n-fuzzy ways, but for most professional sites it's cash.

As luck would have it, compliance saves cash by providing both a common language for new developers and a common language for new platforms. To reap these rewards, however, you have to either change your staff (yep, that happens) or have a new platform become a hugely popular way to access the information.

Enter cell phone browsing. I suspect cell phones will run the internet in the not-too-distant future, with ~40% of the world owning one and around a fifth being "smart" phones by 2008/9. That's about 500 million users needing compliant/valid sites. I'm not saying a valid site will immediately cross over to a usable site on a cell, but it will cost a fraction less to modify.

by Lance at
Hi Lance,

It's funny, you're making an argument on the same premise that mine is based on, but you seem to be missing my point entirely.

I am not arguing against a common language for developers or end users. You'll note that I say very clearly that standardization is only good when it lowers costs, which is what you're also saying.

But then you lose the thread. You seem to assume that every handheld device will somehow barf on non-strict input in the year 2008/2009. This article exists explicitly to point out the fallacy of that assumption. Even if "compliance" does help, you're advocating for the wrong kind. Well-formedness != validation.

Lance, someone is lying to you. It's either me (and I'm betting on Moore's Law and Postel's Law), or it's whoever sold you some XML-encoded bill of goods.

Your argument is that phone browsers will require strict markup in order to parse things faster with lower resource usage (and if that's not the reason, I'd love to see the counter-argument). What I'm pointing out is that that argument is based on well-formedness and NOT validation. I guess I could have made that argument more plain, but the fact remains that someone whose argument you bought is either ignorant of the system engineering problems or is lying to you. I'll vote for ignorant.

Either way, time will tell.

And in 2009, I'll still be able to load this blog on my phone.

by alex at
There is a touch of satirical tone in my message, which might be why it seems to "miss the point". I mostly agree, but see the issues merge into a self-validating argument for standards based on one assumption: standards become, well, standard.

Of course cell phone browsers will have to deal with old sites that never update or it won’t take off. They will be flexible but that doesn’t mean those sites will be particularly successful.

Standards quickly align with profitability, however. A well-formed, standards-compliant page will be easier/cheaper to modify and make usable on new form factors. For example, menus are better at the very bottom of a page for cells, as the person doesn't have to scroll back to the top - difficult with those little buttons. In XHTML2 we have a new <nl> (nav list) element; I bet these will make their way into the cell phone's nav buttons and make sites even easier to use.

One of the purposes of the standard is to keep data where data should be, and design and logic where they belong. The tag, for instance, helps separate and identify navigation from body links. If you use non-standard code (font tags, to make an extreme point) your data (page content, not the code around it) will be locked into a format and harder to repurpose.

Yes, right now, it is not as meaningful, as you point out, because 90% (lazy guess) of internet traffic is forgiving screen renderers. (The rest are search engines, but they do respond to standards compliance to a degree.) So do you waste your time worrying about the case of your tags when it has no effect on anything but the validator? Answer: yes, because Dreamweaver will rewrite your code for you in a single command and the issue is gone; no, because you hand-code and lowercase tags will take you longer to read and use.

Which brings me back to my point: effectiveness (profitability) is the standard in the end. Too much fussing with standards is costly and arrogant, but so is code that can't adapt. Balance is demanded, and if you break a standard, make a note so you can atone for it later.

by Lance at
testing ... >
by Lance at
Ah ha!

I made reference to "the tag" before. This was a mistake because I didn't use the standard-compliant &lt; or &gt; entities in my note. I meant to show a:

<nl>

tag.
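
For the curious, a nav list looks roughly like this in the XHTML 2 working drafts (my sketch; XHTML 2 also lets href sit directly on li):

    <nl>
      <label>Site navigation</label>
      <li href="/">Home</li>
      <li href="/archives/">Archives</li>
    </nl>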

by Lance at
I find myself agreeing with just about every point you have made.

Validation is a really great thing. We need the browsers not to care about validating, but it's good for us as developers. The biggest problem is these moronic developers who base their entire self-image on validating, accessibility, and all the other key terms. I have plenty of sites that don't validate which work great and will continue to work great, so who cares what the W3C's validation service says.

Try to make the point that some text just shouldn't be scalable (like if you wanted dynamic text in front of a graphic or something) and you get such A##H073 responses from some people. Presentation is what IS important sometimes, and every site has different needs and a different audience. Web standards DO NOT cover everyone's needs. That is a simple fact of numbers that will always be true.

by Collin at
When setting up a web page, all kinds of clients will request it: the major web browsers, mobile phones, PDAs, search engines, etc. We get our hands on some to test, but hardly all of them. And as time goes on, completely new clients appear.

Isn't it reasonable to assume that a web page written in valid XHTML has a higher probability of being rendered as intended, compared to a web page using invalid XHTML? Doesn't this probability motivate a few hours in front of the validator?

Roger,

You're missing the point. The point isn't that XHTML is somehow bad, nor that it may or may not be more widely available on whatever set of devices you want to deploy on. And it has absolutely nothing to do with rendering "as intended". As a web developer, you make suggestions, NOT edicts. Things will render as the USERS want them, not how you intend.

And that IS the point: that the users are the end focus (and always should be), and that validation in particular (not well-formedness) isn't helping either you or me in making users' lives better. Whether or not your personal development tends to implement this will be a measure of your success, and to whatever extent handheld browser manufacturers "get" it will be a measure of theirs.

Inadvertently invalid (i.e., wrong tags in the wrong places) code is a bug, but intentionally invalid XHTML is a feature of both the code and of the medium. So let me propose a little experiment for you Roger:

Go make a tiny XHTML web page, fire it up in a phone, add an invalid attribute or tag to the markup, and then re-load it on your phone...what happens?
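
Something like this, say (the made-up attribute is the point):

    <?xml version="1.0" encoding="utf-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>test</title></head>
      <body>
        <!-- well-formed XML, but invalid against the XHTML DTD -->
        <p madeUpAttribute="true">hello, phone</p>
      </body>
    </html>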

Of course it renders. Because your phone isn't checking for validity any more than most deployed XML processors do. And that's with TODAY's CPU and bandwidth available on mobile devices. As they sprout ever-more resources, my argument says that they will get even more lenient.

Phones as a reason for valid XHTML are a straw man, and a shitty one at that.

Someone has been lying to you.

Regards

by alex at
"i.e., wrong tags in the wrong places) code is a bug"

Finding bugs, that is why I use validation. Sometimes I put "wrong tags in the wrong place", misspell an attribute name, etc., and a recursive check with the validator (webval.exe -r http://mysite.com/) will find these kinds of bugs with little effort. "And who does that help?" - It helps the user, since the web site will likely work better than it did before.

However, I guess I have to agree that validation of intentionally invalid XHTML with an XHTML DTD is pretty useless :)

Remember, the HTML and CSS specs are _recommendations_. They are not essential for a site to be viewed. I can write a page of pure text and not include any HTML tags at all. Guess what - the browser still shows it. However, I believe it is worthwhile supporting the specs and using the validator as much as possible. Otherwise your work becomes sloppy (as happened with HTML in the beginning) and mistakes may cause rendering differences.

It is interesting to read that some people commenting here put the emphasis on the user agent being responsible for processing bad markup, not the author for writing strict markup that validates. I wonder how much faster our browsers would be if they didn't have to process tag soup?

Standards are there for a reason though. XHTML 1.0 has done wonders in improving the quality of code we write. I believe the W3C released it in order to get us thinking about fully valid code that is XML compatible. The reason is a future based more on XML parsers.

The problem with such parsers, as can be seen by serving invalid XHTML as application/xhtml+xml to Firefox, is that they fall over if a single error is present. But surely it must be a lot easier to develop such a parser yourself if you expect only valid code? And yet, history so far has shown us that parsers need to cater for a wide range of code, often broken. In fact, I'd say it's essential, for a reason I often cite, which is the scenario where an XML document fails to fully load via the web. Does the parser give up and show nothing? Shouldn't it at least be able to repair (with some guesswork, admittedly) the document and display what it can? I think so, at least for XHTML.
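
A single slip is all it takes; something as small as this, served as application/xhtml+xml, gets you an XML parsing error instead of a page (a sketch):

    <!-- the unclosed br is a well-formedness error in XML, so a
         draconian parser shows nothing at all -->
    <p>first line<br>second line</p>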

Lastly, please, please, please, please, please can the author learn the correct use of English with regards to the difference between "lose" and "loose". It's an increasingly common and highly irritating mistake I see on websites. There really is no need for it. Perhaps we need a validator for spelling?

finally.

great and excellent article.

makes perfect sense.

Thanks for writing and "play on".

Cheers

May I take a few minutes of your time to give the view of an outsider?

Although I have worked in software development since 1966 I am new to the development of web sites. About three months ago I decided to develop my own website - for a variety of reasons including economy, the need to experiment before choosing the appearance of my site and a strong curiosity about what HTML was "about".

My attitude to standards was set by the consequences of getting it wrong - I'm an engineer who came into IT via the programming of numerically controlled machine tools. The sorts of material we machined were extremely hazardous (radioactive, poisonous, inflammable) and there was no room for mistakes.

We used big mainframes (IBM, ICL, UNIVAC) running in batch mode - online access was rare, typically via a teletype for development and a CAD application needed a £250K (1970 prices) console occupying a full selector channel on an IBM 360/75 (several £ million then).

Before anyone thinks I'm going misty-eyed about the old days I should say that I'm more than happy about the way things have changed. My company has at its disposal now the sort of computing power for which we would have then sold our souls to the devil!

However on to the points I wish to make. I am horrified by the utter chaos of the development environment for web applications and especially for web sites. Most of my time is spent trying to work out why things work in browser (a), partially in browser (b) and not in browser (c), (d) and (e). The "reasons" I uncover seem to be arbitrary and rooted mostly in the differences of approach between Internet Explorer and Netscape and aggravated by the workarounds and hacks developed to deal with these.

This environment seems to have led many site developers to the idea that since "standards" don't matter sloppy practice is OK. I say "seems" since I can't prove this but I'm sure all of you have seen sites for which either carelessness or vanity seems the most obvious factor.

Of course a website should be built for its intended users. It ought to be probable (not just possible) that any user agent works to a known and worthwhile subset of the "standard". It ought also to be built to simplify access - and in the early weeks of my web odyssey, as I came to understand what this would mean, I adapted my approach accordingly.

Thus I adopted the use of CSS, decided against the use of frames and tables for structuring, aimed for users with small screens and old browsers and avoided the use of deprecated tags etc. I have used the validation service of W3C for both content plus markup and stylesheets as a way of finding my mistakes and of learning from them. I also bought some books that seemed appropriate (Web Design by Jennifer Niederst, CSS by Eric Meyer, and as the need became obvious Javascript by David Flanagan). I have googled widely to find out how things "work" and have solved a variety of problems from a number of what I expect are the widely-known tutorial sites.

So far this has worked well. But I am left with the question, "Does it have to be such a struggle?". I recognise now that due to the properties of the medium the web design "problem" is different from the "programming" with which I am familiar. Nonetheless there seems to be a worrying theological thread in the discussions of the kind that I find here.

Surely if the user is "king" then meeting his/her needs should be paramount. In this context I see the term "user" as including people like me who want to develop a site which is open to the widest possible audience. Unfortunately I have found this far more difficult than I had expected, and for reasons that I cannot justify.

I took the point made by an earlier correspondent in this thread - that if packets followed TCP/IP the way browsers follow the W3C recommendations, the net would not be the mass medium it is now. In 1975 I worked on EIN - the first European implementation of ARPA packet switching - and in Europe (one of?) the earliest bits of what has become the Net. Because our project was a demonstrator funded by some 80 organisations, we were obliged to follow the ARPA specification. That was the whole point of what we were doing.

Well, the various participants in this conversation have debated knowledgeably and with good humour, invoking the user in support of their points.

With respect, may I ask how your debate is helping to sort out the current mess?
