
Monday, April 26, 2021

On the invisibility of infrastructure

Infrastructure is boring, expensive, and usually someone else's responsibility or problem. Which is perhaps how the UK finds itself at what Jeremy Fleming, head of GCHQ, describes as a moment of reckoning. Simon Wardley analyses this in terms of digital sovereignty.

Digital sovereignty is all about us (as a collective) deciding which parts of this competitive space that we want to own, compete, defend, dominate and represent our values and our behaviours in. It's all about where are our borders in this space. ... Our responses all seem to include a slide into protectionism with claims that we need to build our own cloud industries.

Fleming is particularly focused on "the growing challenge from China", and expresses concern about the UK potentially losing control of "standards that shape our technology environment", which apparently "make sure that our liberal Western democratic views are baked into our technology". Whatever that means. Fleming's technological examples include digital currency and smart cities.

Fleming talks about the threats from Russia and China, and regards China's potential control of the underlying infrastructure as more fundamentally challenging than potential attacks, whether from Russia or from non-state actors.

Fleming notes the following characteristics of those he labels adversaries:

  • Potential to control the global operating system.
  • Early implementors of many of the emerging technologies that are changing the digital environment.
  • Bringing all elements of [...] power to control, influence, design and dominate markets. Often with the effect of pushing out smaller players and reducing innovation. 
  • Concerted campaigns to dominate international standards.

And he continues:

If [any of this] turns out to be insecure or broken or undemocratic, everyone is going to be facing a very difficult future.

It would be easy to hear these remarks as referring solely to China. But he also sounds a warning about corporate power, acknowledging that their commercial interests sometimes (!?) don't align with the interests of ordinary citizens. And with that in mind, it's easy to see how some of the adversarial characteristics listed above would apply equally to some of the Western tech giants.

If the goal is to bake Western values (whatever they are) into our technology infrastructure, it is not obvious that the Western tech giants can be trusted to do this. Smart City initiatives associated with Google's Sidewalk Labs have been cancelled in Portland and Toronto, following (although perhaps not entirely as a consequence of) democratic concerns about surveillance capitalism. However, Sidewalk Labs still appears to be active in a number of smaller smart city initiatives, as are Amazon Web Services, IBM and other major technology firms.

Fleming talks about standards, but at the same time he acknowledges that standards alone are too slow-changing and too weak to keep the adversaries at bay. "The nature of cyberspace makes the rules and standards more open to abuse." He talks about evolutionary change, using a version of Leon Megginson's formulation of natural selection: "it's those that are most able to adjust that prosper". (See my post on Arguments from Nature). But that very formulation seems to throw the initiative over to those tech firms that preach moving fast and breaking things. Can we therefore complain if our infrastructure is insecure, broken, and above all undemocratic?


For most of us, most of the time, infrastructure needs to be just there, taken for granted, ready to hand. Organizations providing these services are often established as monopolies, or turn into de facto monopolies, controlled not only (if at all) by market forces but by democratically accountable regulators and/or by technocratic specialists. However, the Western tech giants devote significant resources to lobbying against external regulation, resisting democratic control. And Smart City initiatives typically embed much the same values everywhere (civic paternalism, biopower).

So here is Fleming's dilemma. If you don't want China to make the running on smart cities, you have to forge alliances with other imperfectly trusted players, whose values are sometimes (!?) not aligned with yours. This moves away from the kind of positional strategy described in Wardley's maps, towards a more relational strategy.

Gordon Corera, GCHQ chief warns of tech 'moment of reckoning' (BBC News, 23 April 2021) via @sukhigill and @swardley

Jeremy Fleming, A world of possibilities: Leading the way in cyber and technology (Vincent Briscoe Lecture @ Imperial College, 23 April 2021) via YouTube.

Susan Leigh Star and Karen Ruhleder, Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces (Information Systems Research 7/1, March 1996)

Simon Wardley, Digital Sovereignty (22 October 2020)

Related posts: The Allure of the Smart City (April 2021)

Sunday, March 18, 2018

Security is downstream from strategy

Following her latest revelations about the misuse of personal data involving Facebook, @carolecadwalla gets a response from Alex Stamos, Facebook's Chief Security Officer.

So let's take a look at some of his hand-wringing Tweets.

I'm sure many security professionals would sympathize with this. Nobody listens to me. Strategy and innovation surge ahead, and security is always an afterthought.

According to his LinkedIn entry, Stamos joined Facebook in June 2015. Before that he had been Chief Security Officer at Yahoo!, which suffered a major breach on his watch in late 2014, affecting over 500 million user accounts. So perhaps a mere 50 million Facebook users having their data used for nefarious purposes doesn't really count as much of a breach in his book.

In a series of tweets he later deleted, Stamos argued that the whole problem was caused by the use of an API that everyone should have known about, because it was well-documented. As if his job was only to control the undocumented stuff.
Or as Andrew Keane Woods glosses the matter, "Don’t worry everyone, Cambridge Analytica didn’t steal the data; we were giving it out". By Monday night, it was being reported that Stamos was on his way out.

In one of her articles, Carole Cadwalladr quotes the Breitbart doctrine:
"politics is downstream from culture, so to change politics you need to change culture"
And culture eats strategy. And security is downstream from everything else. So much, then, for the GDPR's promise of protection "by design and by default".

Carole Cadwalladr ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower (Observer, 18 Mar 2018) via @BiellaColeman

Carole Cadwalladr and Emma Graham-Harrison, How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool (Guardian, 17 Mar 2018)

Jessica Elgot and Alex Hern, No 10 'very concerned' over Facebook data breach by Cambridge Analytica (Guardian, 19 Mar 2018)

Hannes Grassegger and Mikael Krogerus, The Data That Turned the World Upside Down (Motherboard, 28 Jan 2017) via @BiellaColeman

Justin Hendrix, Follow-Up Questions For Facebook, Cambridge Analytica and Trump Campaign on Massive Breach (Just Security, 17 March 2018)

Casey Johnston, Cambridge Analytica's leak shouldn't surprise you, but it should scare you (The Outline, 19 March 2018)

Nicole Perlroth, Sheera Frenkel and Scott Shane, Facebook Exit Hints at Dissent on Handling of Russian Trolls (New York Times, 19 March 2018)

Mattathias Schwartz, Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate (The Intercept, 30 March 2017)

Andrew Keane Woods, The Cambridge Analytica-Facebook Debacle: A Legal Primer (Lawfare, 20 March 2018) via BoingBoing


Wikipedia: Yahoo data breaches

Related posts: Making the World more Open and Connected (March 2018), Ethical Communication in a Digital Age (November 2018), The Future of Political Campaigning (November 2018)


Updated 20 March 2018 with new developments and additional commentary

Wednesday, May 26, 2010

Every anecdote tells another story

@glynmoody picks up a #securitytheatre story from Bruce Schneier's blog, If You See Something, Think Twice About Saying Something (May 2010).

It seems someone got arrested for reporting a suspicious package. Bruce seizes on this as evidence that the security regime is stupid - both the rules and the people executing the rules - and Glyn says "we need more cases like this".

However, as @Foomandoonian points out (based on further information posted in the comments below Bruce's blog), the original news story that prompted Bruce's scorn omitted a crucial detail - an alleged identity between the person reporting the suspicious package and the person leaving it there in the first place. Glyn replies "sure, but I'm interested in the larger point, not the *facts*..."

So we appear to have a bit of face-saving and jumping-to-conclusions here. Either the police are unfairly accusing this gentleman of having deliberately made a false report, or Bruce and Glyn are unfairly pinning the tail on the wrong donkey this time.

Bruce is well-known for his criticism of security theatre, and his blog contains numerous examples of the theatre of the absurd. A few years ago, in my post Intelligence or Fear? I used one of his examples to illustrate intelligence and stupidity, in that instance preferring Bruce's interpretation of events to that of the Australian Prime Minister.

Of course it is tempting to draw attention to any incident that seems to confirm one's strongly-held position about something or other. I've probably done this myself from time to time. But it's not so good if you find yourself misreading the facts to suit your prejudices.

Friday, January 8, 2010

Ice Nine

by Richard and Aidan


Earlier this week, Rachel was on her way to New Zealand via Heathrow. Here's how the interaction of several systems failed her.

1. Thanks to the latest security scare, it now takes two and a half hours to search all the hand baggage and get all the passengers onto the plane.

2. By which time the plane has frozen, and needs de-icing again. That takes another hour.

3. By which time the pilot and co-pilot have already spent so much time sitting on the plane that they no longer have enough flying hours remaining in this shift to take the plane to its destination. So the flight is cancelled.

4. The passengers are asked to return to the baggage hall, collect their checked-in baggage and start the process all over again. But there are many other flights that have been cancelled for similar reasons, and the baggage hall is already full-to-bursting with unloaded bags and frustrated passengers, so Rachel has to wait several hours before her unloaded bags appear on the carousel.

5. Then she has to queue to get onto the next available flight, and the process starts all over again.

By a happy fluke, the next plane Rachel boarded actually managed to take off, and she was on her way to New Zealand, but not before a last-minute search to find enough qualifying aircrew ...


Why does this kind of mess occur? Anyone can look at the whole system and see what could have been done differently. But each system is operated by a different organization, and there is a lack of trust and overall systems leadership.
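
The failure has the shape of a feedback loop, and even a toy calculation shows how little slack there was. Every number below is invented for illustration; real crew duty limits and timings will differ.

    # Toy model of the cascade; all numbers are invented assumptions.
    crew_duty_remaining = 12.0   # assumed duty hours left in the crew's shift
    flight_time = 11.0           # assumed hours needed to fly the leg

    ground_delay = 2.5           # step 1: security search and boarding
    ground_delay += 1.0          # step 2: the plane refreezes; de-icing again

    if crew_duty_remaining - ground_delay < flight_time:
        # step 3: the crew can no longer legally complete the flight
        print("flight cancelled: crew out of duty hours")
        # steps 4 and 5: passengers rejoin the queue, feeding yet more
        # delay into a baggage hall that is already saturated

Each organization in the chain behaves locally sensibly, but nobody owns the loop as a whole.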

As readers of Kurt Vonnegut's novel Cat's Cradle will recognize, Ice Nine was a fictional crystal capable of bringing the whole world to a complete standstill. Quite an apt metaphor for failed systems, then.

Friday, June 9, 2006

A reasonable percentage (3)

One piece of intelligence was accurate.
A man described as Abu Musab al-Zarqawi's "spiritual adviser" inadvertently led US forces to the spot where the militant leader was finally located and killed, the US military says.

Major General William Caldwell said the operation to track down the most wanted man in Iraq was carried out over many weeks, before he was killed after two US air force F-16s bombed a house in a village north of Baghdad.

"The strike last night did not occur in a 24-hour period. It truly was a very long, painstaking deliberate exploitation of intelligence, information gathering, human sources, electronic, signal intelligence that was done over a period of time - many, many weeks," Gen Caldwell said on Thursday.


One piece of intelligence was flawed.
Anti-terror police raided a house at Forest Gate last week after saying they received "specific intelligence" that a chemical device might be found there.

Scotland Yard later said they had "no choice" but to act while the prime minister said it was essential officers took action if they received "reasonable" intelligence suggesting a terror attack.

Tony Blair said he backed the police and security services 101% and he refused to be drawn on suggestions that the armed operation had been a failure.


It's a reasonable percentage. (Previous posts: April 9th, April 18th.)

But that's part of the problem with intelligence - it delivers probability rather than certainty. Perhaps the outcomes are the right way around this time - the presumed-guilty man was killed, and the presumed-innocent man merely injured. (So we shouldn't complain, should we? Imagine the complaints if it had been the other way around!)

But over the long run, are there too many errors? (Difficult to tell, as we only know of some of the better publicized successes and failures.) Should we be uneasy about the errors of intelligence, and the consequences of acting upon erroneous intelligence? There are fundamental questions here about the relationship between knowledge (or ignorance) and action (or inaction).
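
One way to sharpen that question is a back-of-envelope Bayes calculation. The numbers below are invented purely for illustration, but they show how, when genuine threats are rare, even fairly accurate intelligence yields mostly false alarms.

    # Illustrative only: invented numbers, not real intelligence statistics.
    prior = 0.01        # assumed fraction of tips concerning a genuine threat
    hit_rate = 0.9      # assumed P(flagged | genuine threat)
    false_alarm = 0.1   # assumed P(flagged | no threat)

    # Bayes' theorem: P(genuine threat | flagged)
    p_flagged = hit_rate * prior + false_alarm * (1 - prior)
    p_threat = hit_rate * prior / p_flagged

    print(f"P(genuine threat | flagged) = {p_threat:.2f}")
    # With these assumptions, roughly 0.08: most raids would find nothing.

On these made-up numbers, acting on every "reasonable" piece of intelligence means being wrong most of the time - which is exactly the relationship between knowledge and action at issue here.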

Tuesday, April 18, 2006

A reasonable percentage (2)

"It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns, and 60 percent of (fake) bombs. And recently, testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts."
[Bruce Schneier]

If security finds nearly half the bombs, does that count as success or failure? (Glass half-full or half-empty?) Schneier reckons it's probably good enough, and points out: "Against the professionals, we're just trying to add enough uncertainty into the system that they'll choose other targets instead."
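
Schneier's argument about adding uncertainty can be made concrete with a little arithmetic. A sketch, using the miss rates quoted above and assuming each check is independent (which real screening layers are not):

    # Probability of smuggling an item past k independent checks,
    # using the miss rates quoted above. Independence is an assumption.
    miss_rates = {"knife": 0.70, "gun": 0.30, "fake bomb": 0.60}

    for item, miss in miss_rates.items():
        for k in (1, 2, 3):
            print(f"{item}: past {k} check(s) with probability {miss ** k:.2f}")
    # A 60% single-checkpoint miss rate shrinks to about 22% across
    # three layers - perhaps enough uncertainty to deflect professionals.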


"Do not despair; one of the thieves was saved. Do not presume; one of the thieves was damned."
[Saint Augustine]

"One of the thieves was saved. (Pause.) It's a reasonable percentage."

Wednesday, December 21, 2005

Security Orientation

Adam Shostack identifies Three Views of Software Security, which he calls orientations. So I wondered whether these could be mapped onto the four types of trust and mistrust, and whether that reveals a fourth orientation. But the mappings turned out to be a little more complex.

Orientation: Government
Focus: Assurance of quality, reliability, safety, and appropriateness for use
Typical assessment: Commercial security products aren't good enough to be used. We are losing the security war.
Type of Trust: Authority+Network. We are not getting adequate assurances of security - neither from centralized guarantors, nor from the emergent power of the network.

Orientation: Hacking
Focus: Tools and techniques of exploration and exploitation at the micro and macro levels
Typical assessment: Unwilling to confer a positive evaluation on any product or technology vendor (especially Microsoft).
Type of Trust: Commodity+Authentic. We hackers can usually engage more deeply with the product than the vendors themselves.

Orientation: Economic
Focus: People are behaving rationally, if only we can understand their motivations
Typical assessment: Few people ask whether products are secure, so there is little explicit demand for security.
Type of Trust: Commodity+Network. Security (or its lack) emerges from the combined behaviour of rational actors.

There are several other possible permutations, but the orientation I want to encourage is based on Network+Authentic - combining a deep engagement with the (focal) practices of technical security with a broad and dynamic social base (process-driven, community-driven). Next question: how can we foster this orientation?

Thursday, October 27, 2005

WiFi in New Orleans (Katrina)

Ernie the Attorney has returned to his home in New Orleans to find the communications infrastructure down and out. In an interesting echo of his earlier post What we've got here is a failure to communicate (which discussed the signal/noise ratio in email and blogging - see my comment in the POSIWID blog), he now blogs about a much more serious problem: What we have here is an inability to communicate.

Basically, the communications infrastructure in New Orleans is a mess. Ernie is one of the lucky ones, and he is sharing his good fortune with others.
I'm glad to say that my home is one of the few that has high-speed internet. ... I have a Wi-Fi set-up for my internet, and it reaches to the front and back of my house. So pretty much anyone who wants to can use it. My friends often come by and sit in the backyard or on the front porch. Some of these cyber-junkies are friends of friends. I wouldn't think of making my Wi-Fi network secure. Too many people need it. [my emphasis]
This is a commendable example of positive network trust. In the absence of top-down provision, people help each other.

But at the same time, there is clearly some serious mistrust of the authorities. It is not clear how the urgent message in Ernie's blog is going to get through to the people who might do something about it. (This is one of the political aspects of the point made in Ernie's earlier post.)

People in New Orleans are learning things it will take them a long time to forget. It will be interesting to see whether these events bring about a permanent shift in the trust geometry of these communities.

Sunday, October 23, 2005

Quarantine 2

A parrot infected with H5N1 dies in quarantine (BBC News). Does this mean that quarantine works, or at least has worked in this instance, to protect us from bird flu?

In a contrasting example, sniffer dogs returning to the UK from earthquake duty in Algeria (BBC News) and Kashmir (Reuters) are forced to undergo quarantine, making them unavailable for further duties (including protection against terrorism) for six months. Does this mean that quarantine is stupid and inflexible? Is this yet another example of interference (poor interoperability) between different security systems?

Even in this case there is a possible justification for quarantine, since it seems reasonable to suppose that a dog operating in a crisis zone such as Kashmir or New Orleans might be at greater risk of disease. The dog is doing dangerous work, and is highly likely to receive scratches and other minor injuries through which germs can enter. And there are more germs, as well as stray dogs, rats and other threats.

Rescue dogs are in a similar position to human medical staff - they are more likely to catch things. Of course they have all the possible jabs, but is that enough? And of course they have plenty of opportunity to transmit things to people who are already weak and are therefore particularly vulnerable to infection. Do you want to be bitten by the dog that pulls you from the rubble?

So there is a complicated risk trade-off calculation going on here. Who is going to do this risk calculation? The existing systems and regulations may produce absurd results, but what is the alternative? The Home Secretary overturns the regulations?!?

Meanwhile, what about young men with strong religious beliefs who go to foreign countries to provide earthquake assistance? The UK security forces will find it hard to work out which ones have had contact with dangerous influences, and will therefore be obliged to put all of them into some kind of virtual quarantine (e.g. close surveillance) on their return.

How effective, efficient and fair is any given security mechanism, and what are the unwanted side-effects?

See also Quarantine 1 (October 2005)

Saturday, October 1, 2005

Quarantine

Question

Banks are highly aware of some types of threat, but seem to ignore other types of threat. How can you have a secure system in which one party is systematically blind to a particular class of threat? You would have to hold them in some sort of quarantine.

Answer

How can you have a secure system that only works if all the parties are completely free of conceptual limitations?

I think my children are systematically blind to certain things. (No doubt they think I'm systematically blind to certain things.) This means I trust them in certain contexts/situations and not in others.

A guard dog can provide some degree of security, can be involved in a secure system. That remains true despite the fact that dogs are unable to recognize certain classes of threat, and you certainly wouldn't delegate the design of the whole system to the guard dog. Why can't we say the same about a bank?

You put a dog into quarantine because you think it might have rabies, not because you think dogs are stupid. Banks aren't stupid either.

Quarantine may be a useful architectural pattern in certain situations. It protects against delayed attacks - such as a disease with a fixed incubation period, or a software virus. An entity remains in quarantine until it can be properly scanned and disinfected, or until the disease emerges and runs its course, or until the incubation period expires. (For example, a software artefact might be presumed free of a Friday 13th software virus if nothing detectable happens on Friday 13th.)
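
As a software pattern, this is easy to sketch. The snippet below is a hypothetical illustration - the class and method names are invented, not taken from any real library - of an artefact being released only when it has been scanned clean or its assumed incubation window has expired.

    import time

    class Quarantine:
        """Hold artefacts until scanned clean or the incubation period expires."""

        def __init__(self, incubation_period_secs):
            # Hypothetical sketch of the Quarantine pattern: all names invented.
            self.incubation = incubation_period_secs
            self.held = {}  # artefact id -> time it entered quarantine

        def admit(self, artefact_id):
            """Place an artefact in quarantine."""
            self.held[artefact_id] = time.time()

        def release(self, artefact_id, scanned_clean=False):
            """Release only if scanned clean or the incubation window has expired."""
            entered = self.held.get(artefact_id)
            if entered is None:
                raise KeyError(f"{artefact_id} is not in quarantine")
            if scanned_clean or (time.time() - entered >= self.incubation):
                del self.held[artefact_id]
                return True
            return False  # still suspect: keep holding it

The Friday 13th example corresponds to the expiry branch: if nothing detectable has happened by the end of the window, the artefact is presumed clean and released.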

However, guard dogs need to be contained - for their own safety as well as the safety of others. They must be protected against specific attacks - the burglar who tries to feed them drugged meat, or to confuse them with extreme smells. When dogs bark their heads off, these reactions need to be properly interpreted. And when a dog doesn't bark in the night, this may provide an important clue to what happened (Sherlock Holmes).

Similarly, banks might need to be contained, and their activity (and inactivity) interpreted. (But I don't think this counts as quarantine.) But whether this is necessary (or even meaningful) depends on the architecture of the whole collaborative system.

See also Quarantine 2 (October 2005)

Thursday, September 8, 2005

Hogwarts Security 2

In my previous post, I suggested that the ineffectual security mechanisms in the Harry Potter books could be read as part of J.K. Rowling's ongoing satire against technology. The books also include a good dose of political satire, regularly presenting the Minister for Magic and his aides in a poor light.

In the Prisoner of Azkaban, Hermione possesses a Time Turner, which allows her to be in two places at once. She and Harry use this device to frustrate the plans of the Ministry of Magic, while retaining a cast-iron alibi. And yet Hermione's possession of the Time Turner had previously been authorized by the Ministry of Magic - presumably by a separate department. Clearly the wizarding world has failed to embrace Joined-Up-Government.

All through the Half-Blood Prince, wizards mock the stupid authentication mechanisms invented by the Ministry of Magic.

"You have not asked me, for instance, what is my favourite flavor of jam, to check that I am indeed Professor Dumbledore and not an imposter, ... although of course, if I were a Death Eater, I would have been sure to research my own jam preferences before impersonating myself."

"I still don't understand why we have to go through that every time you come home. ... I mean, a Death Eater might have forced the answer out of you before impersonating you." "I know, dear, but it's Ministry procedure and I have to set an example."

In my view, Rowling has perfectly captured the kind of bureaucratic panic that causes Government Departments to disseminate such half-baked security schemes.

Into The Machine (updated now with a sensible title and a new URL) is an excellent blog documenting the serial follies of the British Home Office. And here is a great video of the British Home Secretary, singing the benefits of the UK Identity Card scheme. [updated to add] ... and I've just discovered this sequel thanks to Robin Wilton.

Tuesday, December 21, 2004

The empire strikes back

originally posted by John


Sic transit gloria Microsoft, I wrote, tongue only slightly in cheek, thus passes the glory of Microsoft. That fits in quite nicely with Richard’s ‘Can we trust software?’ blog, I thought. Then suddenly, I wasn’t so clever, I wasn’t smiling. I wasn’t typing, wasn’t emailing, wasn’t blogging. In fact I wasn’t doing anything except continually rebooting my machine. I lost count at fifteen times and finally resorted to a rescue disc.


In the same edition of Bruce Schneier’s monthly newsletter that Richard refers to, I’d picked up on a piece entitled ‘Safe Personal Computing’. It begins, ‘I am regularly asked what average Internet users can do to ensure their security. My first answer is usually, "Nothing - you're screwed."’


Bruce then explains precisely why we’re screwed, before adding, ‘Browsing: Don't use Microsoft Internet Explorer, period.’ And, ‘I'm stuck using Microsoft Windows and Office, but I use Opera for Web browsing and Eudora for e-mail.’


‘Right’, I thought, ‘I’ll walk the walk with him.’ After all he’s a top guru on this sort of stuff so if it’s good enough for him, it’s got to be good enough for me. (A touch of commodity trust with a little authority and network aftertaste there, I think.)


A couple of hours later I’m browsing OK with Opera, my new default browser, and trying, with no success, to email with Eudora. Then I started writing that sic transit blog and suddenly all bets were off. Immoveable dialog boxes I’d never dreamed existed flashed threatening error messages across the screen terminating me. Word refused to save anything yet opened page after blank page for me. Nothing would work. Even my virus scanner invoked a terminal Winword error message (Winword!). It was as though the operating system had taken offence and was teaching me exactly who was boss. So, after fifteen-odd reboots and rejigs, I rescued the machine, sent Opera and Eudora back where they came from and restored all Bruce’s ‘never use this’ stuff. And total harmony reigns. Even Winword ignores my virus scanner now.


I’ve never trusted software, to answer Richard’s question, ever since I was in charge of a nominal ledger suite way back in the old ICL1902S days and the FD tried to make me punch the cards to make it give him the numbers he wanted. I still don’t. Even less now I’ve seen the empire striking back.

Wednesday, December 15, 2004

Commodity trust rules

originally posted by John

Cigarette smuggling is big business. It’s global. Almost as big - bigger some maintain - as the legitimate cigarette business. Giant tobacco firms profiteer from it effortlessly.



EU and national government efforts to do something about it have been reported religiously by Private Eye over the years - a favourite cause of the magazine. So far there’s been little success. Now, the current Eye reveals, things are changing. Though Patricia Hewitt, Trade and Industry Secretary, still refuses to publish her department’s latest report, so there’s no saying whether or not the change is for the better.


Among the quotes, conjecture and background in the Eye piece there’s a snippet from Kenneth Clarke, cigar-smoking deputy chairman of BAT (British American Tobacco, one of the profiteering giants) and former Tory chancellor. ‘We act,’ Clarke says, ‘on the basis that our brands will be available alongside those of our competitors in the smuggled as well as the legitimate market.’


Crooked or straight, good or bad, black market or white, the brand must go on. Where commodity trust is at stake there are no rules.

Tuesday, April 17, 2001

Networks of Trust - Who Betrayed Harry Potter's Parents?

The best-selling children’s novel, Harry Potter and the Prisoner of Azkaban, illustrates several important points about trust.

Harry’s parents are hiding from the Dark Lord ("He Who Must Not Be Named"). James Potter can nominate one friend to guard the secret of his whereabouts, and chooses his strongest and apparently most trustworthy friend – Sirius Black. But for Sirius, being the obvious choice makes him immediately vulnerable to attack: in systems engineering terms, he is a single point of failure. He decides that the Potters’ secret would be better guarded by a less obvious person, and delegates the responsibility to a weaker wizard – Peter Pettigrew – who immediately betrays the Potters.

This is an example of transitive betrayal. It illustrates the following points:
  • The strongest component is the most obvious place to attack – and this makes it vulnerable. A powerful adversary trying to break the system, or to breach its security, may well think it worth investing effort into finding how to break the strongest component. (The system is as weak as its strongest link.)
  • This leads to the Decoy pattern. A highly visible component draws fire, but isn’t really worth attacking. This is like sending an armoured truck out of the front gate containing the sandwiches, while the gold bullion slips quietly out of the back gate in an unmarked, unarmed van. (The system is stronger than its strongest visible link.)
  • However, the Decoy pattern is worthless once the illusion is broken. Strength that depends on secrecy is always vulnerable to leakage. A linear (one-to-one) delegation chain is as strong as its weakest link.
  • To delegate responsibility to weaker components, we need to use the Distributed Delegation pattern – where the system now relies on the concerted strength of all the components working together, rather than being vulnerable to the weakness of each. Parallel (one-to-many) delegation is much stronger than linear delegation. (See the sketch after this list.)
  • Trust is transitive – whether you like it or not. If you trust a component or service from company X, and this depends on a component or service from company Y, then you are implicitly trusting company Y as well, although you may not even know that company Y exists.
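
The Distributed Delegation pattern has a direct analogue in cryptographic secret splitting. Here is a minimal sketch (illustration only, not production cryptography) that XOR-splits a secret among n guardians, so that all n shares are needed to reconstruct it and no single Pettigrew can betray the whole secret.

    import secrets

    def split(secret: bytes, n: int) -> list[bytes]:
        # n-1 random shares, plus one share that XORs them back to the secret
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = secret
        for share in shares:
            last = bytes(a ^ b for a, b in zip(last, share))
        return shares + [last]

    def combine(shares: list[bytes]) -> bytes:
        # XOR of all shares recovers the secret; any proper subset reveals nothing
        secret = shares[0]
        for share in shares[1:]:
            secret = bytes(a ^ b for a, b in zip(secret, share))
        return secret

    shares = split(b"Godric's Hollow", 3)
    assert combine(shares) == b"Godric's Hollow"

Any subset of fewer than n shares is statistically independent of the secret, so a single traitor learns nothing. The price is that losing any one share destroys the secret, which is why threshold (k-of-n) schemes such as Shamir's secret sharing are used in practice.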


Extract from an article published in the CBDI Journal, April 2001.