The Web and its 27-year journey to become a critical part of modern life

Interviewing ESET’s experts about the Web’s journey so far – part 1

What has the journey of the World Wide Web been like so far, as seen and experienced by ESET’s security folk? ESET Senior Research Fellow David Harley provides his take in the first installment of our series of interviews marking the Web’s 27th birthday

On August 6, 1991, English computer scientist Tim Berners-Lee publicly announced the World Wide Web project, which effectively marked the birth of the Web. Twenty-seven is a respectable age and, while it isn’t usually considered a milestone birthday in general terms, this is a good occasion for some retrospection and for asking a few questions.

How did Berners-Lee’s brainchild deal with the teething problems of its early age? Did it get out of hand in adolescence as so many of us did? And since it’s reached adulthood and become an intrinsic part of our daily lives, has it actually achieved maturity? Does this analogy to human life make any sense at all? Um, this is a question you’ll need to answer for yourself.

Either way – and just as importantly (and not only for security folks) – are web and browser security keeping up with the increasingly insidious dangers lurking on the web?

To mark this event we sat down with three of our experts for a series of interviews and we’re starting today with ESET Senior Research Fellow David Harley.

What were you up to on August 6, 1991?

For some reason, Tim Berners-Lee didn’t let me know his plans personally, so it didn’t make my diary. But I do know approximately what I was doing. We were a few days away from the Eleventh International Workshop on Human Gene Mapping, which ran from August 18 to 22, 1991, in London. I did much of the administration for HGM11 (and for HGM10.5 the previous year) but by that time I’d have been mostly occupied setting up a bunch of PCs we’d hired. Apart from setting up required applications (and a rapid-fire de-installation routine so that the machines could be returned in out-of-the-box condition as soon as possible after the event), I was also responsible for setting up antivirus software. The scanner we were using was a simple on-demand scanner, as tended to be the case back then, so I also put together some routines (primitive by modern standards) for scheduled scanning and integrity checking, and while I didn’t go as far as writing an on-access scanner, I did force an on-demand scan of executables before they were run. Of course, by the time I joined the security industry in 2006, coding for security products was much, much more complicated.
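The integrity checking Harley mentions is still a recognizable technique: record a cryptographic hash of each known-good executable, then periodically re-hash and compare. A minimal modern sketch in Python (his originals were DOS-era batch routines, and the file names here are purely illustrative):

```python
import hashlib
from pathlib import Path

def file_hash(path):
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths):
    """Record known-good hashes for a set of files."""
    return {str(p): file_hash(p) for p in paths}

def check_integrity(baseline):
    """Return files whose contents no longer match the baseline -
    a classic (if primitive) way to spot viral modification."""
    return [path for path, digest in baseline.items()
            if file_hash(path) != digest]
```

A scheduled run of `check_integrity` over a stored baseline approximates the periodic checks he describes; it can flag that something changed, but not what changed it, which is why it supplemented rather than replaced the on-demand scanner.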

How have the Web and the Internet as a whole changed over the past 27 years?

In the early 1990s, most people only had corporate email, if they had email at all. Most organizations in the UK didn’t have their own internet feed (Imperial Cancer Research Fund, where I was working, was one of the first). And while I did have email by then, I had to log directly onto a VMS server to access it. Most apps were text-based unless you had a Mac or something like a Sun workstation (yes, I used both, some of the time…) but even then apps like telnet or kermit often just emulated the dumb terminals that many people were still using. There was graphical stuff around, even for MS-DOS, but it tended to be quite specialized – graphics packages, DTP, statistical software and so on. As Windows evolved into a real graphical interface, that accelerated the change to mouse-driven apps, so you started to get graphical versions of utilities like ftp. And, come to that, browsers.

I’d say the big change in Web usage, as in local applications, has been less about the services provided than in the way that modern operating systems have standardized graphical and networking interfaces and moved away from command lines, at least for the everyday user. If you weren’t heavily involved with systems administration and user support at that time, it’s harder now to realize just how many issues there were with client/server interfacing, incompatible hardware, operating systems, data formats and applications: how many people nowadays have to mess around with jumper settings on motherboards or network cards or hard disks, or with drivers for individual devices?

The general trend towards maximization of ‘user-friendliness’, the reduced cost of hardware (I remember in 1990 that we paid something like £1,200 for a 4MB RAM upgrade!) and the ongoing trend towards more computing power in smaller packages have been major factors in the way we use the Web and in the functionality available. When I bought my first mobile phone in the mid-90s it didn’t even do SMS, let alone access web pages… Now, anyone who can afford a smartphone and contract can get the sort of functionality (and more) that few people had on their home machines in the early 90s. When I built my first web pages, I coded them by hand. Nowadays, you can build sites using templates supplied by the site provider that have more features and require little or no knowledge of HTML, let alone all the other coding that underlies a complex site. A few years ago I replicated someone else’s complex, multi-page website in less than two days, using my favorite CMS.

In other words, anyone can be a content provider rather than just a content consumer. 

Did you expect the Web to revolutionize so many aspects of our lives?

I’m not sure I thought about it in those terms at that time. It’s not as though Berners-Lee said “Let there be connectivity” and suddenly there was all this stuff possible that we couldn’t do before. In some ways, the importance of the Web was that it rationalized many processes that already existed, and you didn’t have to use the Web or a Web browser to access online information. (You still don’t, but it’s awfully convenient.) At the time when it was actually still possible to have visited every site on the Web, I was still getting more use out of Gopher – which is still around, by the way, though I don’t imagine many people use it. In fact, right up to 2001 I was still using a text-based browser – lynx – as a tool for providing a network-based security information resource. But as I’ve suggested above, the skill level required to have a visible presence on the Web has decreased dramatically. However, that’s a mixed blessing at best, in terms of aesthetics, security, and general social interaction.

Let us now zero in on cybersecurity. How has the transition from the read-only (inert, one-directional …) Web 1.0 to the participatory (interactive, social …) Web 2.0 influenced security?

Well, the Internet during the Web 1.0 era was never entirely one-directional. Anyone with a little money and knowledge could set up their own web page, and while the Web was essentially seen as another tool for searching/browsing hypertext links, there were plenty of other tools that offered a more interactive experience – Usenet, for example, which had been around for over a decade, and which was a good source for security discussion and information, but was also liable to all sorts of misuse and abuse by virus writers and other miscreants. But Web 2.0 widened the scope of the web to include the interactive and social advantages of these tools as well as usurping the more generalist/consumer-oriented functionality of portals like AOL, which tended to include such services as forums and messaging services as well as (sometimes) direct Internet access.

In addition, of course, the interactive Web developed other features such as freely available blogging resources, social media like Myspace and Facebook, and telephony-oriented features such as Twitter and Instagram. These made it much easier for individuals to have their own voice and channels of self-expression, of course, but also widened the attack surface immeasurably, with a dramatic increase in the number of ways and places wherein an innocent user of the Internet could come to harm.

I’m not about to claim that the Web is a safer place than it used to be: anything but. However, the expansion of the web has certainly had an influence on security. Earlier on, a suitably cautious and well-informed user of the Internet might have been able to get away with not using security software, as long as they were careful about checking the sources of emails, not opening potentially unsafe attachments, staying away from dubious forums and web sites, and so on. Today… well, personally I wouldn’t venture online from a PC without a decent security suite – antivirus alone is better than nothing, but not enough. Even those zealots who claim that antivirus is dead have abandoned the claim that all you need is common sense: instead, they tend to suggest a range of alternative solutions (not all of them effective and/or user-friendly). It’s also reached a point where smartphone users – especially Android users – also need to consider their options carefully as far as security software is concerned, as well as where they go and whether it’s safe to click.

On a related note, how has the security of websites and web applications evolved over the years?

Security isn’t all about end users protecting themselves from malware (and phishing, social engineering, fake news and other stuff against which security software is less effective), but also about service providers of one sort or another taking responsibility for the safety of their customers. The major financial institutions have long taken phishing-related issues seriously, but other major sites (social media, retail, auction sites, application providers, news portals, search engines) are at least far more aware than they were of the potential impact when their sites – or, perhaps worse, their customer databases – are compromised, even though the sheer volume of such compromises (leakage of credentials, DDoS, cross-site scripting, drive-by downloads) is terrifying. But the Big Names are more prepared to take some responsibility and remedial measures, even if their PR content is sometimes more about lip service than willingness to subvert their own financial models by implementing the strongest possible security. Nevertheless, there’s much more to be done, while IoT and IIoT (Industrial Internet of Things) providers have barely begun to recognize the risks, let alone take responsibility for them.

Browser-based attacks are effective and popular. What are the main security issues for web browsers?

How long have we got?

  • In-browser storage of site credentials
  • Privilege escalation by modification of memory space
  • Tracking of user activity across sites by legitimate and less legitimate sites
  • Unsafe extensions and plugins
  • In-browser cryptomining
  • DNS hijacking and spoofing
  • Malicious redirects
  • Shortened URLs
  • SEO issues
  • Etcetera…
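Several of the items above – shortened URLs and malicious redirects in particular – share one trait: the link a user sees hides the real destination. A first line of defence is simply recognizing that a link passes through a shortening service at all, so it can be previewed before clicking. A minimal sketch (the shortener list here is illustrative, not exhaustive):

```python
from urllib.parse import urlparse

# Illustrative (not exhaustive) sample of popular URL-shortening domains
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co", "ow.ly"}

def is_shortened(url):
    """Flag links served through a known URL-shortening service,
    which conceal the real destination until clicked."""
    host = urlparse(url).hostname or ""
    return host.lower() in KNOWN_SHORTENERS
```

Real security products maintain far larger, constantly updated lists and also resolve the redirect chain server-side; the point of the sketch is only that the visible URL is not, by itself, evidence of where a click will land.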

What would make browsers less inviting targets and/or conduits for attacks?

More security and less user-(over)friendliness.

What are your recommendations for making one’s browsing more secure?

Don’t prioritize convenience over security. Don’t, for instance, store credentials unnecessarily in the browser in order to save 10 seconds of typing. Do your browsing from an unprivileged account. Be very selective about what plug-ins you use. Don’t make your security settings too low. Treat all those GDPR pop-ups with caution: many of them are just CYA-compliant, they’re not there to enhance your privacy.

Since many common attacks on web users, such as man-in-the-middle attacks, have to do with faulty authentication, what would make user authentication safer?

The Holy Grail: convenience that doesn’t compromise security. That would be a loooonnnng article in itself. I’ll save that for another time…

How will the ongoing strong drive for universal HTTPS adoption as led by Google, Mozilla and others help?

Less than the companies concerned would like us to think – certainly as long as we fail to educate end users about the limitations of that approach.

What are your expectations for the Web, say, 10 years from now?

“Don’t make predictions about computing that can be checked in your lifetime” – Daniel Delbert McCracken. Not that I’m counting on living that long…

How about the place of security in the Web 3.0 with its many monikers such as omnipresent, thinking, semantic, the web of data, etc.?

I’m not sure those terms are really synonymous. For instance, Web 3.0 is often considered as essentially a means of smarter searching by getting context from the consumer. The Berners-Lee vision of the semantic web is more about enhanced relevance in data seeking and sharing through machine analysis. But machine intelligence/AI has a long way to go before it escapes from the limitations of human programming. What could go wrong? Well, take a look at Facebook’s algorithms – or at least at their effectiveness in practice, since FB tends to be secretive about how they work – and tremble. As for Web 3.0 security, I think we need a more precise formulation of what Web 3.0 will actually be before we consider how best to maintain it securely.

The late Stephen Hawking once stated, “the development of full artificial intelligence could spell the end of the human race”. Full AI is years off, but do you agree that the situation could be so grim?

See my comments on Web 3.0.

How can we prepare for the Web’s future and for security risks that emerging technologies entail? Are we “swimming in the right direction” at all?

Some, perhaps. Some are trying to paddle where a front crawl is needed. And many are simply throwing the user a leaky flotation device.


This is the first article in our series of interviews with our experts. You can read part two in the series here.