Case Study: The Myth of Free Wi-Fi

Invisible and insecure infrastructure is facilitating data exploitation

Many technologies, including those that are critical to our day-to-day lives, do not protect our privacy or security. One reason for this is that the standards which govern our modern internet infrastructure do not prioritise security, which is imperative to protecting privacy.


What happened?

An example of this is Wi-Fi, which is now on its sixth major revision (802.11ad). Wi-Fi was always designed to be verbose in the way it signals to other Wi-Fi devices; however, what made connectivity realisable in the early days has become a hindrance in our connected world.

Wi-Fi infrastructure was built to prioritise responsiveness and connectivity rather than ensuring that users are able to communicate privately. And overall, people have not minded: the expectation consumers have of Wi-Fi technology is that it just works, and works quickly. To facilitate quick connectivity, many devices openly broadcast unique signals in order to locate Wi-Fi access points. The information broadcast publicly includes the names of other Wi-Fi networks the device has connected to before. This wouldn’t be a problem if those names were gibberish, but they tend to reveal personal information, since network and device names are often highly descriptive, from John’s iPhone to Starbucks, Vodafone_Staff or 62 Britton St. As a result, from a single broadcast by John’s device alone we can learn a large amount of information, all of which can be highly valuable to advertisers, social engineers, or nefarious hackers.
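To make this concrete, here is a minimal sketch, not a complete tool, of how easily those broadcasts can be read by anyone nearby. It uses the Python scapy library and assumes root privileges and a wireless card already placed in monitor mode under the hypothetical interface name wlan0mon; it simply prints which device is asking for which previously joined network.

```python
# Minimal sketch: passively reading the network names (SSIDs) that nearby
# devices reveal in Wi-Fi probe requests. Assumes scapy is installed, root
# privileges, and a wireless interface already in monitor mode ("wlan0mon").
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def handle_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        if ssid:  # a directed probe names a network the device has joined before
            print(f"{pkt.addr2} is looking for '{ssid}'")

# Capture probe requests until interrupted (Ctrl-C).
sniff(iface="wlan0mon", prn=handle_probe, store=False)
```

Running something like this in a café for a few minutes is typically enough to build a list of device addresses and the home, work, and travel networks their owners have joined in the past.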


What’s the problem?

The insecurity of Wi-Fi is partly a result of how the technology has expanded in ways its creators did not foresee. However, insecurity in technologies that are fundamental to our modern infrastructure is not limited to older technologies. Despite the passage of time, similar mistakes continue to be made. A good example is the recent intervention by the government of the United Kingdom in the rollout of smart meters.

In 2016, the British signals intelligence agency GCHQ took the unprecedented step of halting the deployment of smart meters when it discovered that every device in the deployment shared the same decryption key, just as smart meters were in the process of being rolled out to 53 million homes. A Whitehall official warned that someone breaching the key could start blowing things up.

As Dr Ian Levy, now technical director of the United Kingdom’s National Cyber Security Centre, said later: “The guys making the meters are really good at making the meters, but they might not know a lot about making them secure. The guys making head-end systems know a lot about making them secure, but not about what vulnerabilities might be built into them”. This statement typifies the disconnect between manufacturers, vendors, implementers, and customers, a gap which can only be bridged with independent standardisation and regulation.
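The underlying design flaw, a single key shared across every device, has a well-understood alternative. The sketch below is illustrative only and is not a description of how the smart meter system actually works: it derives a distinct key for each device from a master secret held centrally, using HKDF from the Python cryptography library, so that extracting one meter’s key does not compromise the rest of the fleet.

```python
# Illustrative sketch only, not the actual smart-meter key scheme: derive a
# unique key per device from a centrally held master secret, so compromising
# one device's key does not expose every other device in the deployment.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_secret = os.urandom(32)  # held by the central system, never shipped to devices

def per_device_key(device_id: str) -> bytes:
    """Derive a 256-bit key bound to a single device identifier."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=device_id.encode(),  # binds the derived key to this device
    ).derive(master_secret)

# Each meter gets its own key; no two devices share key material.
assert per_device_key("meter-0001") != per_device_key("meter-0002")
```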

As both Wi-Fi and the more recent case of smart meter standards in the UK illustrate, security and privacy are often an afterthought or a bolt-on rather than built in by design. Such issues may be a result of how technology has expanded in ways unforeseen by the creators of the technologies fundamental to our modern infrastructure. Regardless of the cause, our devices, networks and infrastructures should be designed in a way that protects people.


What’s the solution?

It is critical that civil society engage with regulators and standardisation bodies to produce technology that provides strong security, and thereby strong privacy, for users. Civil society and industry must also make a much more coordinated effort to educate the public on the virtues of failing well. By this we mean that when security risks arise, the user is notified in a way that is understandable to them, and where the danger is severe the product should fail rather than continue in an unsafe state. Individuals understand the importance of fuses in their electronics; the equivalent of fuses in software, however, remains mostly the purview of the technically literate rather than of the masses.

Devices and services should also minimise the cone of data they create when they are used. By this we mean that the verbosity of communications and logging should be kept to a minimum during production use, superfluous data and metadata should be discarded, and where possible communications should be privacy preserving, neither uniquely identifiable nor creating profile-able traits.
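As a small illustration of the “failing well” idea described above, the following sketch uses the Python requests library and a hypothetical helper name; when the server’s identity cannot be verified it raises a clear, understandable error instead of silently falling back to an unverified connection.

```python
# Minimal sketch of "failing well": fetch_securely is a hypothetical helper
# that refuses to continue when TLS certificate verification fails, rather
# than downgrading to an unverified connection.
import requests

def fetch_securely(url: str) -> bytes:
    try:
        # verify=True is the default: the server's certificate is checked
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.content
    except requests.exceptions.SSLError as err:
        # Fail closed, and explain the failure in plain terms.
        raise RuntimeError(
            f"Refusing to connect: the identity of {url} could not be verified ({err})"
        ) from err

if __name__ == "__main__":
    print(len(fetch_securely("https://example.org/")))
```

The design choice is the point: the safe state is the default, and continuing in an unsafe state requires a deliberate, visible decision rather than a silent fallback.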

Privacy International has begun to engage with standards bodies to open a dialogue about ensuring future standards are able to handle the ongoing boom in connectivity in a security- and privacy-preserving way. Privacy International has forthcoming Data Exploitation Principles, which it believes will provide a set of guidelines that standards bodies and implementers should follow when developing new technologies. Specifically, Privacy International believes that technologies such as devices, networks, services and platforms should not betray, or be capable of betraying, their users, and that they should not leak data.