Thursday 17 October 2013

WYCRMS Part 6. It's OK, the Resilient Partner Can Take Over

In 1997, an HP 9000 engineer wouldn't blink while telling me about a server that had been running continuously for over five years. I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

6. It's OK, It's One of a Resilient Pair

No, it's not.

So I have two Active Directory Domain Controllers. They both run DHCP with non-overlapping scopes, have a fully replicated database, and any client gets both servers in a DNS query, so load-balancing is pretty good. Only they're not a single service. They may be presented to users as such, but they are distinct servers, running distinct instances of software that, apart from sending each other directory updates, don't cooperate. A user locates AD services using DNS, and simply rebooting a server leaves those DNS entries intact. You've now intentionally degraded your service (only half of your nodes are active) without giving your clients a heads-up.
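
You can see exactly what clients are told by asking DNS for the domain controller SRV records yourself; a minimal sketch, using a hypothetical domain corp.example.com:

nslookup -type=SRV _ldap._tcp.dc._msdcs.corp.example.com

Both DCs come back in the answer, and rebooting one of them does nothing to remove its records from that list - clients will keep being pointed at a server that isn't there.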

Sure, DNS timeouts will direct them to the surviving node eventually, but why would you intentionally degrade anything when it's avoidable? It's only thirty seconds, you say? Why is this tolerable to you?

Failover cluster systems are also not exempt. One of the benefits of these clusters is that a workload can be moved to (proactively) or recovered at (after a failure) another node. Only failover clustering is shared-nothing, so an entire service has to be taken offline before it is started on the other node. Again this involves an outage, and as much as Microsoft have taken pains to make startup and shutdown of, say, SQL Server much quicker than in the past, other vendors are likely not as diligent. It's astonishing how quickly unknown systems come to rely on the one you thought was an island, and how badly they cope when it's interrupted.
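
Even the "planned" move is an outage in miniature. A sketch of moving a clustered SQL instance using the old cluster.exe command line (the group name is whatever your cluster actually calls it):

cluster group "SQL Server (MSSQLSERVER)" /moveto:NODE2

Everything in that group is stopped on the owning node before any of it is started on NODE2; however slick the tooling, the service is down for the duration.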

Only in the case of an actively load-balanced cluster can taking one node down be said to be truly interruption-free. When clients request service, the list of nodes returned does not contain stale entries. When user sessions are important, the alternate node(s) can take over and continue to serve the client without a blip or login prompt. In case you're confused, this is still no reason to shut down a server anyway; refer to the previous five articles if you're leaning that way. And if you're thinking of leaving a single node to keep on churning through the load, you haven't quite grasped what resilience is there for.

The point of a resilient pair is that it is supposed to survive outages, not to act as a convenient tool for admins to perform disruptive tasks hoping users won't notice. There's a similar tendency for people to use DR capacity for testing, without considering whether the benefits of that testing truly outweigh the reduction or elimination of DR capacity.

Application presentation clusters (e.g. Citrix XenApp) are a favourite target for reboots, and the most often-cited area where these reboots are supposedly best practice. Here it is: the only vendor-published document for current software I have found in the last five years advocating a scheduled reboot: Citrix's XenDesktop and XenApp Best Practices Guide, page 59. It is poor, to say the least:

A rolling reboot schedule should be implemented for the XenApp Servers so that potential application memory leaks can be addressed and changes made to the provisioned XenApp servers can be reset. The period between reboots will vary according to the characteristics of the application set and the user base of each worker group. In general, a weekly reboot schedule provides a good starting point.

More imprecise advice is hard to find in technical documents. How exactly does the administrator, engineer or designer know the level of their "potential" exposure to memory leaks? I've spent some time exploring this issue in the previous articles, and I stand by my point: if an administrator tolerates poor behaviour by applications or - worse - the OS itself without making an attempt to actually correct the flaw (e.g. contacting the vendor to demand a quality product), that administrator is negligent, scheduled reboots are a workaround, and nobody can have a reasonable expectation of quality service from that platform.

But most of all: how are you ever going to trust a vendor who has so little faith in their product that it cannot tolerate simply operating? I'm not singling out Citrix here, but their complacency in the face of bad code is shocking. I admire Citrix, which is why this display of indifference disappoints me. Best practice, I guess...

Then we get to sentences two and three of this three-sentence paragraph, which tell our reboot-happy administrator to try a particular schedule without a definitive measure in sight. There's a link on how to set up a schedule and how to minimise interruption while it happens, but not one metric, or even a place to find one, is proposed. He/she is given a vague "meh, a week?" suggestion with zero justification, apart from being "feels-right"-ish.

If a server fails, it is for a specific reason. Sometimes this is buried deep in kernel code, the result of interactions that can never be meaningfully replicated, or something far more exotic. In most cases, however, the cause is precise and identifiable (memory leaks included), and computing is honestly not so hard that it cannot be fixed.

You can probably tell I'm an open-source advocate, because I firmly believe in reporting bugs. I also believe in getting to see the response to that bug. I've found some projects to be more responsive than others, but generally, if I've found something that is not just broken but damaging, I see people jumping to attention - and that's people volunteering.

If you're buying your software from a vendor, that is the floor their response to you starts from. Tolerate nothing less than attention, and get your evidence together before they start pointing fingers.

When you work in a large organisation you realise things have designations and labels for a reason. Resilient pairs are for unanticipated failures, and DR servers are for disasters.

You don't get to hijack a purpose just because it's unlikely it will be needed - they exist precisely for the unlikely.

Previous: WYCRMS Part 5. Nobody Ever Runs a Server That Long

Monday 14 October 2013

WYCRMS Part 5. Nobody Ever Runs a Server That Long

In 1997, an HP 9000 engineer wouldn't blink while telling me about a server that had been running continuously for over five years. I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

5. Nobody Ever Runs a Server That Long

Uptime

IT Engineers can take this stuff really seriously. I was quite proud of my own server at home that ran, uninterrupted, for one year and three months, during which time I upgraded the RAID array without any filesystem disruption, hosted at least twenty different types of VM (from Windows 3.11 to Windows 2008), served media files and MySQL databases, shared photos with Apache and personal files with Samba. Only the fast-moving BTRFS in-kernel driver broke me from that little obsession, but you don't need to run a Unix-variant to get that kind of availability.

Windows admins are simply used to "bouncing" computers to correct a fault. Hey, it works at home, right? It's complacency, a quick fix, often in response to an urgent need to restore service - troubleshooting can and often does take a back seat when management is standing over your shoulder.

Since Windows was a command launched at a DOS prompt, the restart has been a panacea, and often a very real one that unfortunately actually works. It almost never adds any insight into the fault's cause. Perhaps there's a locked file you're not aware of preventing a service from restarting, or a routing entry updated somewhere in the past that isn't reapplied when the system starts up again; there are myriad ways a freshly started server can have a different configuration from the one you shut down, allowing service to resume.
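
A classic example of the latter: a route added by hand during some long-forgotten troubleshooting session. A sketch (the addresses are invented):

rem Fixes the problem today, silently vanishes at the next restart:
route add 10.50.0.0 mask 255.255.0.0 192.168.1.254

rem The persistent form, which survives a reboot:
route -p add 10.50.0.0 mask 255.255.0.0 192.168.1.254

A reboot that "fixes" a server by discarding the first kind of route has taught you nothing about why the route was there, or whether anything still needs it.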

Once a server is built, it is configured with applications and data, then generally tested (implicitly or explicitly) for fitness of purpose. Once that is done, it processes a workload and needs no further attention, assuming all tasks such as backups, defragmentation and so on are scheduled and succeed. Windows isn't exactly a finite-state machine (in the practical sense), but it is nonetheless a closed system that can only perform a limited set of tasks, and its failure modes should be easy to predict.

Servers are passive things. They serve, and only perform actions when commanded to. Insert the OS installation DVD, run a standard installation, plug in a network cable, and the system is ready for action. Only it's not configured to take action just yet - it's waiting. In this state, I think most engineers would expect it to keep waiting for quite some time - perhaps a year or more. But add a workload - say, a database instance or a website - and attitudes change.

I've had frequent discussions with engineers who will tell me things like "this server has been up for over a year, it's time to reboot it". Somewhere between an empty server and a billion-row OLTP webshop is a place where the server goes from idle and calm to something that genuinely scares engineers - just for running a certain amount of time.

When pressed for exactly which component is broken (or likely soon will be) that this mystic reboot is supposed to fix, I never get anything specific, just a vague "it's best practice".

Windows Updates are frequently cited as a reason to reboot servers, and thanks to the specifics of how Windows implements file locking, yes, the reboot there is unavoidable. This leads to the unfortunate tendency to accept reboots as a normal part of Windows Server operation, and to see the reboot as the point (with an update thrown in since the server is rebooting anyway) instead of an unfortunate side-effect. I realise the need to keep servers patched, but again, when pressed for a description of which known defects (that have actually affected service, or plausibly could) a particular update - with its associated downtime - will fix, the response comes in: "Um, best practice?"

In the absence of an actual security threat, known defect fix or imminent power failure, I am rarely convinced to shut a server down. I first included "vendor recommendation" in that list, but realised I've yet to see one. Ever.

Even at three in the morning when no sane customer could be relying on a system, during a once-quarterly change window when all services are nominated unavailable so service providers can make radical changes, even then: No, you can't reboot my server.

If engineers took the time to think about where in the continuum from empty server to complex beast the point of fear arrives, they could figure out which bit is scaring them and make sure it is well understood, properly configured and maintained. Unfortunately, that takes time, effort and sometimes a bit of theory and modelling. Rebooting is so much less effort.

Windows Server can, and should, be expected to remain ready for service for as long as the hardware can last. With the advent of virtualisation and vMotion, even that obstacle is gone, and the limits are practically nowhere to be found. Applications are another story, and if the developer or support specialist thinks they need restarting, that's fine, but they have zero authority to suggest this for Windows.

I've heard the phrase "excessive uptime" identified as the root cause of outages. I doubt Microsoft would like to know that the engineers they certify are - and I don't say this lightly - genuinely afraid of their product doing its job, as designed, for years. And when a genuine OS fault does occur that only a reboot can solve, it is quite shocking how few engineers will actually report the problem to the vendor, tolerating workarounds, design hacks and kludgy scripts instead.

In the same way that one can learn a procedure for changing the spark plugs on a specific model of engine, while completely missing the black smoke ejected from the exhaust thanks to a chronic ignition timing failure, so too can engineers who have not attained even a mediocre grasp of computing theory continue to diagnose and treat only symptoms.

A server failing to continue to do its core function of staying up is not a mild symptom that a reboot can fix. It is a fundamental failure of the product, and failing to do the hard thing of actually understanding why and demanding the vendor improves their product does nobody any service.

In fact, it's negligent.

Previous: Part 4. Windows Updates and File Locking

Thursday 10 October 2013

WYCRMS Part 4. Windows Updates and File Locking

In 1997, an HP 9000 engineer wouldn't blink while telling me about a server that had been running continuously for over five years. I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

4. Windows Updates

It's Lab Time!

Open a console on Windows (7, 2008, whatever), and enter the following as a single line (unfortunately wrapped here, but it is a single line):

for /L %a in (1,1,30) do @(echo %a & waitfor /t 1 signal 2>NUL) >numbers

What's that all about, you ask? Well, it sets up a loop with a counter (%a) that increases from 1 to 30. On each iteration, the value of %a is echoed to standard output, and WAITFOR (normally a Resource Kit tool, included in Windows 7/2008 R2) pauses for one second waiting for a particular signal. 2>NUL just means I'm not interested that the signal never arrives and want to throw away the error message (there's no simple sleep command I can find in Windows available to the console). Finally, >numbers sends the output to the file numbers.

As soon as you press enter, the value 1 is written to the first line of a file called numbers. One second later, the line is overwritten with the value 2, and so on for thirty seconds.

Now open another console in the same directory while the first is still running (you've got thirty seconds, go!) and type (note: type here is the name of a command to enter, not an instruction for you to start typing):

type numbers & del numbers

If the first command (the for loop) is still running, you'll get the contents of the file (a number), followed by an error when the delete is attempted - this makes sense, as the first loop is still busy writing to the file.

This demonstrates a very basic feature of Windows called File Locking. Put simply, it's a traffic cop for files and directories. The first loop opens a file and writes the first value, then the second, then the third, all while holding a lock on the file. This is a message to the kernel that nobody else is allowed to alter the file (deletes, moves, modifications) until the lock is released, which happens when the first program terminates or explicitly releases the file.
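
If you want to see who is holding that lock, Sysinternals' handle utility will list open handles matching a name fragment. A sketch, run from an elevated prompt in the same directory while the loop is still going:

handle.exe numbers

The first console's cmd.exe should appear as the owner of the handle on numbers - that is the process the kernel is protecting the file for.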

This is great for applications (think editing a Word document that someone else wants to modify), but when it comes time to apply updates or patches to the operating system or applications it can make things very complex. As an example, I have come across a bug in Windows TCP connection tracking that is fixed by a newer version of tcpip.sys, the file that provides Windows with an IP stack. Unfortunately, if Windows is running, tcpip.sys is in use (even if you disable every NIC), so as long as this file is being used by the kernel (always) it can never be overwritten. The only time to do this is when the kernel is not running - but then how do you process a file operation (itself performed by the kernel) when the kernel is not available?

Windows has a separate state it can enter before starting up completely where it processes pending operations. Essentially, when the update package notices it needs to update a file that is in use it tells Windows that there is a newer version waiting, and Windows places this in a queue. When starting up, if there are any entries in this queue, the kernel executes them first. If these impact the kernel itself (e.g. a new ntfs.sys), the system performs another reboot to allow the new version to be used.
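
One place this queue is visible is the PendingFileRenameOperations value in the registry; a sketch of checking whether anything is waiting for the next restart:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations

If the query comes back empty, nothing is queued in this particular mechanism, and a "reboot required" flag deserves a second look.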

This is the only time a reboot is necessary for file updates. Very often administrators simply forget to do simple things like shut down IIS, SQL or any number of other services when applying a patch for those components. A SQL Server hotfix is unlikely to contain fixes for kernel components, so simply shutting down all SQL instances before running the update will remove the reboot requirement entirely.
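
A minimal sketch of that discipline, assuming a default instance whose service names are MSSQLSERVER and SQLSERVERAGENT:

rem Stop the dependent agent first, then the engine:
net stop SQLSERVERAGENT
net stop MSSQLSERVER
rem ...apply the SQL Server hotfix here...
net start MSSQLSERVER
net start SQLSERVERAGENT

With the instance already stopped, the installer has nothing locked to wait for, and the reboot prompt usually never appears.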

Similarly, Internet Explorer is very often left running after completing the download of updates, some of which may apply to Internet Explorer itself. Even though this is not a system component, the file is in use and so it is scheduled for action at reboot. Logging in with a stripped-down, administratively-privileged account to execute updates removes the possibility that taskbar icons, IE, an Explorer right-click extension or anything else is running that might impede a smooth, rebootless deployment of a patch that touches interactive user components.
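
A quick check before launching the installer costs nothing; a sketch (substitute whichever image names your own estate tends to leave running):

tasklist /fi "imagename eq iexplore.exe"

If that returns a process, close it (or log off the offending session) rather than letting the installer schedule the file swap for the next reboot.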

This is simply a function of the way Windows handles file locking, and a bit of planning to ensure no conflicts arise can remove unnecessary reboots in a lot of cases.

Previous: Part 3. Console Applications, Java, Batch Files and Other Red Herrings

Wednesday 9 October 2013

WYCRMS Part 3. Console Applications, Java, Batch Files and Other Red Herrings

In 1997, an HP 9000 engineer wouldn't blink while telling me about a server that had been running continuously for over five years. I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

3. Console Applications, Java, Batch Files and Other Red Herrings

Not to start this off on a downer, but I need to let you know I'll be insulting a few types of people in this post. I also need to make one thing extra-clear: I hate Java.

The idea is great: write code once and run it on any platform without recompilation. Apart from the hideously long time it took for Java to come to 64-bit Linux, support is pretty good too. Sun (then Oracle) have been responsive in fixing bugs, but being a platform it is somewhat more difficult to roll out updates, so huge numbers of obsolete JRE deployments are left lying around for nefarious types and buggy software to run amok on. The reason I hate it is twofold: it allows developers to be lazy about memory management and rely on automatic garbage collection, and almost every application I've come across (except Apache Tomcat-based WARs) explicitly constrains support to certain platforms. This is not what I was promised.

When someone talks about "Windows running out of handles", "memory leaks", "stale buffers" or any number of technical-sounding pseudo-buzzphrases, they are almost always trying to describe a software malfunction that appears as a Windows failure, or are simply too lazy to investigate and realise it is almost invariably caused by lazy programming. Java does this, but I don't blame Java, I blame Java programmers. The opinion is rife that Windows gets less stable the longer Java applications run and that reboots are a Good Thing™. If someone genuinely believes that server stability can be impacted by poor software, but does not report it to the vendor, I will inform that person he/she is lazy.

As I mentioned in Part 1, Windows engineers seem to scale their experience of Windows at home up to their professional roles, and I've seen developers do the same. Windows doesn't do pipes very well, or they are language- or IDE-specific. Outputting to the Event Log is slightly arcane and in fact requires compilation of a DLL to make most output meaningful; it's rarely used outside Microsoft themselves. So developers rely on consoles for the display of meaningful output.

These consoles then become part of the deployment practice, perhaps wrapped in a batch file. If your program relies on a console window (and therefore a logged-in, interactive user session) or, worse, requires me to edit a batch file to apply configuration changes (as opposed to, say, a settings file parsed by said batch file), your software is nowhere near mature enough to be deployed on a system I would expect people to depend on. As a programmer, I question your maturity too.
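
If a batch wrapper really is unavoidable, the configuration at least belongs in a separate file the wrapper reads, not in the wrapper itself. A minimal sketch for a batch file - the settings.ini name, the JAVA_HEAP variable and application.jar are invented for illustration:

rem settings.ini contains lines such as: JAVA_HEAP=512m
for /f "usebackq tokens=1,2 delims==" %%a in ("settings.ini") do set %%a=%%b
java -Xmx%JAVA_HEAP% -jar application.jar

Now a configuration change is an edit to a data file, not to executable logic, and the wrapper itself never needs touching.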

It's people and organisations like that who typically have one response to issues that crop up: install the latest Service Pack, maybe the latest Windows Updates too (that fixes everything, right?), and if all else fails upgrade to the Latest Version of our software - don't worry that it's got a slew of new features that likely have bugs too, they'll be fixed in the next Latest Version. Rinse, repeat.

As a Windows Engineer, your job is to defend the platform from all attackers. That's not just bad folks out there trying to steal credit card numbers and use you as a spam bot, it's also bad-faith actors trying to deflect the blame from their own inadequacy. It's application owners prepared to throw your platform under the bus to hide their poor procurement and evaluation standards. It's users who saw a benefit in a reboot once and think it's a panacea.

It is in everyone's interest to call people out when they fail to deal with this stuff properly, or you'll quickly find yourself supporting a collection of workarounds instead of a server platform.

Previous: Part 2. Windows Just Isn't That Stable

Tuesday 8 October 2013

WYCRMS Part 2. Windows Just Isn't That Stable

In 1997, an HP 9000 engineer wouldn't blink while telling me about a server that had been running continuously for over five years. I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

2. Windows Just Isn't That Stable

Ah, Blue Screen of Death, how I've missed you. Actually, I haven't, since finding out what caused them was a nightmare, and recovering without a remote console solution is not conducive to a predictable social life (or sleep schedule). That said, they were so common we even had joke screen savers mimicking them for our own geekish amusement. Since Microsoft acquired Sysinternals, they're even available to download directly from Microsoft. Imagine your in-car entertainment system being configured to show you fake warnings of a failed brake line, or a cracked cylinder head. "Would you like the free video package of Ford vehicles endangering passengers' lives with your new Focus, sir?" IT people are weird.

I've analysed my Windows 7 x64 installation, and in the last three years I've had six bluescreens. One was my graphics card (a one-off); all the others were my Bluetooth headphones putting my cheapo Bluetooth dongle in a spin. I blame the dongle, not Windows.

OK, that's not fair to the dongle maker: I blame Windows, but only the Bluetooth stack, since it's never been something I expect Windows to do well - multiple dongle-headphone combinations have yet to produce a pleasant experience (three dongles, two headphone models). The network card, storage stack, print drivers, memory management, process scheduler (NUMA-aware these days, apparently): these all work so well I haven't noticed them doing their job, and I am very familiar with what a complex job they have.

Roughly once a month I can expect to see a BSoD on public transport, at a station, in an airport, or on a billboard. The layout of the BSoD has changed over the years, with each version of Windows getting a little tweak so that you can spot the version even if the error itself is gibberish, and viewing these blue non-advertisements leads me to conclude that the systems behind them tend to be A) old, B) written in languages and coding styles that aren't that good, and C) interfacing with devices that have terrible drivers.

This is not typical of modern Windows servers.

I would never dream of subjecting a server to the amount of change my hard-working personal workstation endures. AMD updates my video drivers multiple times a year, I attach and detach USB/phone/iSCSI devices more often than I refill my car's tank, and run code from pretty much anywhere as long as it promises me utility or entertainment. A server is different, running things I trust to go on processing without attendance, cleaning up after itself, and basically staying up. If I do make changes, it's controlled, tested and left the hell alone.

Windows Server is solid, and every iteration gets more solid. It's expanding into 64-bit address spaces, handling multipath iSCSI with ease and more cores than I have fingers in byzantine NUMA layouts, hosting server instances in their own right with Hyper-V, pushing gigabytes around through network cards and storage interfaces, crunching data and, most importantly, providing services.

Yet the very people who spend time and money proving they are skilled in designing and administering these systems so that they can adorn their signatures and office receptions with impressive Microsoft-approved decals are the first to tell you not to trust a given server (without even knowing the workload or configuration) to remain available. They express surprise and concern on viewing a server continuously running for over a year.

I'm surprised and, yes, concerned that they react this way. Isn't this what your sales folks promised me in the first place?

Previous: Part 1: But I Have to Reboot My Own Windows System All the Time!

Monday 7 October 2013

WYCRMS Part 1: But I Have to Reboot My Own Windows System All the Time!

In 1997, an HP 9000 engineer wouldn't blink while telling me about a server that had been running continuously for over five years. I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

1. But I Have to Reboot My Own Windows System All the Time!

I've mentioned before how Windows makes you lazy. One of the great things about Microsoft Windows as a platform is that software developed on a $500 workstation can be installed on a $50,000 server and probably work without problems. Of course, getting your home-brew software to scale is a different matter, but you get the idea: One platform, different size.

Almost every Windows engineer cuts their teeth on Windows at home, and this informs their experience and expectations of the platform. Like everyone, I get tired of things bogging down after a few days/weeks/months of uptime and reboot just to clear things up, but that's my fault, not Windows'.

I'm lazy.

Typically, I'm running browsers, office suites, anti-virus, any number of games, and install new stuff roughly once a fortnight. Flash, Java and Windows Update are constantly pestering me to reboot after updates. I've even been the one to reinstall completely after a year to see the wonder of a zippy start-up and responsive GUI, only to have it slowly crawl as I add functionality (including those games). Happily, my Windows 7 installation has lasted two years by now with no significant falloff in responsiveness, so that's getting much better, and I only power down/reboot of my own volition when I'm fitting lights and need mains power off - even then it's more likely to be a hibernate.

Servers are not workstations. Any good enterprise has controls for how changes are made to IT systems, and even simple patching requires testing and an approved window to take the system down and update it. In my experience a server will undergo a major overhaul at most twice in its operational lifetime, and organisations with exceptional controls have zero - new version? New server!

A good server (and I think of Windows Server 2003+ as good servers) will run for decades given quality power and no moving parts. Of course hardware fails, but Microsoft have put in man-decades to get Windows to handle routine changes without downtime. I remember Windows NT 4.0 needing a reboot for an additional IP address. Modern versions of Windows can hot-plug an entire NIC (physically) without a blink, though admittedly I've never actually encountered anyone who uses the facility.
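
The additional-IP-address case is a one-liner these days; a sketch (the interface name and addresses are whatever your server actually uses):

netsh interface ipv4 add address "Local Area Connection" 192.168.10.45 255.255.255.0

The address is usable immediately - no restart, not even a blip on the interface.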

If an engineer merely mentions that, in their experience, Windows needs rebooting, I question their experience. I mean it: I question their experience!

Windows is solid, and I can recall only one confirmed bug where Windows will fail (actually, begin to fail; an outage is not a certainty) simply as a result of running continuously for a given time. When someone speaks of a memory leak that has caused Windows to run out of (insert woolly term here), again I question their experience and the quality of the software or vendor driver code. I've stopped blaming Microsoft.

When I run my applications on Windows Server and, more importantly, when I am paying someone to manage those systems for me, I expect them to have faith in their products and promise me server availability. Rebooting breaks availability.

Previous: Why You Can't Reboot My Server

Why You Can't Reboot My Server

When I was an on-site server engineer in 1997, I stood next to an HP 9000 engineer waiting for a SCSI hard drive at our parts depot, and we got chatting about his next work order: he was off to install a tape drive. I asked him what the new hard drive had to do with it, and he mentioned that the server in question had been running continuously for over seven years, and at least one drive was likely to get stuck and refuse to spin again once he turned the frame back on.

I found this remarkable at the time, and couldn't imagine a Windows server lasting that long. I have moved on, and frankly expect my Windows servers to survive that long today. Very few share this position, and I'm trying to find out why it's so lonely on this side of the fence.

In this series of posts, I'll be looking at the most common complaints from Windows engineers and administrators they feel are adequate to justify rebooting servers, either as (or instead of) a diagnostic step, on a schedule that can best be described as arbitrary, or even artificially to apply fixes for problems the system doesn't have.

In this series:
Part 1: But I Have to Reboot My Own Windows System All the Time!
Part 2. Windows Just Isn't That Stable
Part 3. Console Applications, Java, Batch Files and Other Red Herrings 
Part 4. Windows Updates and File Locking 
Part 5. Nobody Ever Runs a Server That Long
Part 6. It's OK, the Resilient Partner Can Take Over
Part 7. I Don't Think You Understand What A Server Is


How to Make Your Customers Feel Like Meat in a Tube

Few things annoy me more than web-based forms for initiating customer contact. My experience of them ranges from poor to dismal, and even when I point out up front that I expect the company to fail in its response, I am rarely surprised by brilliance (or even adequacy).

The first problem with these forms is actually the result: your enquiry ends up not as an e-mail to a person, but as a record in a database. Some forms are worse than others at betraying this, but if you even have to select your company size or your decision-making role, you can be sure you're being slotted into a Customer Spamming Service machine.

From there, around four out of five responses make no reference to your original query. Unlike e-mail, where you can save your initial contact in your Sent folder and hitting Reply typically generates a new mail on top of your original one, the first response you receive almost always has no history. You're left scratching your head wondering if you really forgot to mention your product's model number, even when you remember having to look up the Unicode for the unnecessarily accented é in the product name. Whether a human typed out your reply, selected a form response or some machine logic matched your keywords to information already available in the FAQ, I will offer odds, without knowing who the company is, that the original question is not included for reference.

It's a pain to fill in forms like these repeatedly for each individual question, so you might be tempted to put more than one question in your query. Beware, traveller - the company will choose whichever answer most closely matches their prepared form responses and send that to you, regardless of the prominence you try to give to the one you really need answered first.

Errors on the form? How about not residing in the US, so skipping the "state" field, only to be told the field is mandatory. OK, I live in Wyoming, Netherlands. Ah, the form now tells me a state filled in outside the US is an invalid choice? Check the dropdown - yep, only US states available and no way to not pick one. Don't bother complaining about the logic in your actual request - you see, the people choosing which stock response to send (the one that doesn't adequately deal with your query) are in no way connected with the end of the sausage maker that ruins your customer contact experience from the start. They just turn the handle.

While sending an enquiry to a prominent software vendor, I happened to have NoScript turned on and found the form broken beyond use. This is simply not justifiable. Oh well, I'll enable the site for JS, but lo! The form fails to complete again. This time it is because a piece of code from a marketing firm has not arrived. So prominent, it even has the word "market" in its name. Answering my question vs completing my digital profile for a third party: which do you think they care about most?

All this from an IT security company that sells products to control mobile phone policies - to stop users from doing things like installing untrusted software that sends their data to unknown parties without telling you.

Why am I running a marketing company's JavaScript to collect my personal information to initiate an evaluation of your products?

I am just meat in a sausage to you people, aren't I?