Richard Veryard's Blog, page 2

January 22, 2023

Reasoning with the majority - chatGPT

#ThinkingWithTheMajority 

#chatGPT has attracted considerable attention since its launch in November 2022, prompting concerns about the quality of its output as well as the potential consequences of widespread use and misuse of this and similar tools.

Virginia Dignum has discovered that it has a fundamental misunderstanding of basic propositional logic. In answer to her question, chatGPT claims that the statement "if the moon is made of cheese then the sun is made of milk" is false, and goes on to argue that "if the premise is false then any implication or conclusion drawn from that premise is also false". In her test, the algorithm persists in what she calls "wrong reasoning". In classical propositional logic it is exactly the other way around: a conditional with a false antecedent is vacuously true.
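
To spell that out for readers who never met the propositional calculus: the material conditional p -> q is defined as equivalent to (not p) or q. Here is a minimal truth-table sketch; the Python encoding is my own illustration, not part of Dignum's test.

```python
# Truth table for material implication: p -> q is equivalent to (not p) or q.
# A conditional with a false antecedent is vacuously TRUE - the opposite
# of what chatGPT claimed.

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5}  q={q!s:5}  p->q={implies(p, q)}")

# With a false premise ("the moon is made of cheese"), the implication
# holds whatever we say about the sun:
moon_is_cheese = False
sun_is_milk = False   # arbitrary - the false antecedent already settles it
assert implies(moon_is_cheese, sun_is_milk) is True
```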

I can't exactly recall at what point in my education I was introduced to propositional calculus, but I suspect that most people are unfamiliar with it. If Professor Dignum were to ask a hundred people the same question, it is possible that the majority would agree with chatGPT.

In which case, chatGPT counts as what A.A. Milne once classified as a third-rate mind - "thinking with the majority". I have previously placed Google and other Internet services into this category.

Other researchers have tested chatGPT against known logical paradoxes. In one experiment (reported via LinkedIn) it recognizes the Liar Paradox when Epimenides is explicitly mentioned in the question, but apparently not otherwise. No doubt someone will be asking it about the baldness of the present King of France.

One of the concerns expressed about AI-generated text is that it might be used by students to generate coursework assignments. At the present state of the art, AI-generated text may look plausible but typically lacks coherence; it would be unlikely to be awarded a high grade, but it could easily be awarded a pass mark. In any case, I suspect many students produce their essays by following a similar process, grabbing random ideas from the Internet and assembling them into a semi-coherent narrative without actually doing much real thinking.

There are two issues here for universities and business schools. Firstly, whether the use of these services counts as academic dishonesty, similar to using an essay mill, and how this might be detected, given that standard plagiarism detection software won't help much. And secondly, whether the possibility of passing a course without demonstrating correct and joined-up reasoning (aka "thinking") represents a systemic failure in the way students are taught and evaluated.

See also

Andrew Jack, AI chatbot’s MBA exam pass poses test for business schools (FT, 21 January 2023) HT @mireillemoret

Gary Marcus, AI's Jurassic Park Moment (CACM, 12 December 2022)

Christian Terwiesch, Would Chat GPT3 Get a Wharton MBA? (Wharton White Paper, 17 January 2023)

Related posts: Thinking with the Majority (March 2009), Thinking with the Majority - a New Twist (May 2021), Satanic Essay Mills (October 2021)

Wikipedia: ChatGPT, Entailment, Liar Paradox, Plagiarism, Propositional calculus 


August 17, 2022

Discipline as a Service

In my post on Ghetto Wifi (June 2010), I mentioned a cafe in East London that provided free coffee, free biscuits and free wifi, and charged customers for the length of time they occupied the table.

A cafe for writers has just opened in Tokyo which, in effect, charges people for procrastination: you can't leave until you have completed the writing task you declared when you arrived.


Justin McCurry, No excuses: testing Tokyo’s anti-procrastination cafe (Guardian, 29 April 2022)

Related posts: The Value of Getting Things Done (January 2010), The Value of Time Management (January 2010)


April 20, 2022

Constructing POSIWID

I've just been reading Harish Jose's latest post A Constructivist's View of POSIWID. POSIWID stands for the maxim (THE) Purpose Of (A) System Is What It Does, which was coined by Stafford Beer.

Harish points out that there are many different systems with many different purposes, and the choice depends on the observer. His version of constructivism therefore goes from the observer to the system, and from the system to its purpose. The observer is king or queen, the system is a mental construct of the observer, and the purpose depends on what the observer perceives the system to be doing. This could be called Second-Order Cybernetics.

There is a more radical version of constructivism in which the observer (or perhaps the observation process) is also constructed. This could be called Third-Order Cybernetics.

When a thinker offers a critique of conventional thinking together with an alternative framework, I often find the critique more convincing than the framework. For me, POSIWID works really well as a way of challenging the espoused purpose of an official system. So I use POSIWID in reverse: if the system isn't actually doing this, then this probably isn't its real purpose.

Another way of using POSIWID in reverse is to start from what is observed, and try to work out what system might have that as its purpose. If this seems to be the purpose of something, what is the system whose purpose it is?

This then also leads to insights on leverage points. If we can identify a system whose purpose is to maintain a given state, what are the options for changing this state?

As I've said before, the POSIWID principle is a good heuristic for finding alternative ways of understanding what is going on, as well as for seeing why certain classes of intervention are likely to fail. However, the moment you start to think of POSIWID as providing some kind of Truth about systems, you are on a slippery slope to producing conspiracy theories and all sorts of other rubbish.


Philip Boxer and Vincent Kenny, The Economy of Discourses: A Third-Order Cybernetics (Human Systems Management, 1990)

Harish Jose, A Constructivist's View of POSIWID (17 April 2022)

Related posts: Methodological Syncretism (December 2010)

Related blog: POSIWID: Exploring the Purpose of Things



January 4, 2022

On Organizations and Machines

My previous post Where does learning take place? was prompted by a Twitter discussion in which some of the participants denied that organizational learning was possible or meaningful. Some argued that any organizational behaviour or intention could be reduced to the behaviours and intentions of individual humans. Others argued that organizations and other systems were merely social constructions, and therefore didn't really exist at all.

In a comment below my previous post, Sally Bean presented an example of collective learning being greater than the sum of individual learning. Although she came away from the reported experience having learnt some things, the organization as a whole appears to have learnt some larger things that no single individual may be fully aware of.

And the Kihbernetics Institute (I don't know if this is a person or an organization) offered a general definition of learning that would include collective as well as individual learning.


If you understand #learning is the process of acquiring #knowledge which is a measure of an individual's "fitness" in performing a given task, you can use the same system model on both humans and organizations.

— The Kihbernetics Institute (@Kihbernetics) January 3, 2022

I think that's fairly close to my own notion of learning. However, some of the participants in the Twitter thread appear to prefer a much narrower definition of learning, in some cases specifying that it could only happen inside an individual human brain. Such a narrow definition of learning would not only exclude organizational learning, but also animals and plants, as well as AI and machine learning.
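
One way to make the generality of that definition concrete is to treat learning as an interface that different kinds of entity can implement. The sketch below is purely illustrative - the class names, the fitness numbers and the split between individual skill and shared routines are my own assumptions, not anyone's published model - but it shows how an organization's learning need not reduce to the learning of its current members.

```python
# A deliberately general model of learning, close to the definition quoted
# above: learning is whatever improves an entity's "fitness" at a task.
# Nothing in the interface requires the learner to be a human brain.
# All names and numbers here are illustrative assumptions.

from typing import Protocol

class Learner(Protocol):
    def fitness(self, task: str) -> float: ...
    def learn(self, task: str, experience: float) -> None: ...

class Individual:
    def __init__(self) -> None:
        self.skill: dict[str, float] = {}

    def fitness(self, task: str) -> float:
        return self.skill.get(task, 0.0)

    def learn(self, task: str, experience: float) -> None:
        self.skill[task] = self.fitness(task) + experience

class Organization:
    """Fitness lives partly in shared routines, not just in members."""

    def __init__(self, members: list[Individual]) -> None:
        self.members = members
        self.routines: dict[str, float] = {}   # collective memory

    def fitness(self, task: str) -> float:
        best_member = max((m.fitness(task) for m in self.members), default=0.0)
        return best_member + self.routines.get(task, 0.0)

    def learn(self, task: str, experience: float) -> None:
        # some of the experience sticks to individuals, some to routines
        for m in self.members:
            m.learn(task, experience / 2)
        self.routines[task] = self.routines.get(task, 0.0) + experience / 2

org = Organization([Individual(), Individual()])
org.learn("forecasting", 1.0)
org.members = [Individual()]        # complete staff turnover
print(org.fitness("forecasting"))   # 0.5 - the routines persist
```

On this model, the same measure of learning applies to humans and organizations alike, which is roughly the Kihbernetics point; whether you read the organizational case literally or metaphorically is a separate question.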

As it happens, there are differing views among botanists about how to talk about plant intelligence. Some argue that the concept of plant neurobiology is based on superficial analogies and questionable extrapolations.


But in this post, I want to look specifically at machines and organizations, because there are some common questions in terms of how we should talk about both of them, and some common ideas about how they may be governed. Norbert Wiener, the father of cybernetics, saw strong parallels between machines and human organizations, and this is also the first of Gareth Morgan's eight Images of Organization.

Margaret Heffernan talks about the view that organisations are like machines that will run well with the right components – so you design job descriptions and golden targets and KPIs, manage it by measurement, tweak it and run it with extrinsic rewards to keep the engines running. She calls this old-fashioned management theory.

Meanwhile, Jonnie Penn notes how artificial intelligence follows Herbert Simon's notion of (corporate) decision-making. Many contemporary AI systems do not so much mimic human thinking as they do the less imaginative minds of bureaucratic institutions; our machine-learning techniques are often programmed to achieve superhuman scale, speed and accuracy at the expense of human-level originality, ambition or morals.

The philosopher Gilbert Simondon observed two contrasting attitudes to machines.

First, a reduction of machines to the status of simple devices or assemblages of matter that are constantly used but granted neither significance nor sense; second, and as a kind of response to the first attitude, there emerges an almost unlimited admiration for machines. Schmidgen

On the one hand, machines are merely instruments, ready-to-hand as Heidegger puts it, entirely at the disposal of their users. On the other hand, they may appear to have a life of their own. Is this not like organizations or other human systems?



Amedeo Alpi et al, Plant neurobiology: no brain, no gain? (Trends in Plant Science Volume 12, ISSUE 4, P135-136, April 01, 2007)

Eric D. Brenner et al, Response to Alpi et al.: Plant neurobiology: the gain is more than the pain (Trends in Plant Science Volume 12, ISSUE 7, P285-286, July 01, 2007)  

Anthea Lipsett, Interview with Margaret Heffernan: 'The more academics compete, the fewer ideas they share' (Guardian, 29 November 2018)

Gareth Morgan, Images of Organization (3rd edition, Sage 2006)

Jonnie Penn, AI thinks like a corporation—and that’s worrying (Economist, 26 November 2018)

Henning Schmidgen, Inside the Black Box: Simondon's Politics of Technology (SubStance, 2012, Vol. 41, No. 3, Issue 129 pp 16-31)

Geoffrey Vickers, Human Systems are Different (Harper and Row, 1983)


Related post: Where does learning take place? (January 2022)



January 2, 2022

Where does learning take place?

This blogpost started with an argument on Twitter. Harish Jose quoted the organization theorist Ralph Stacey:

Organizations do not learn. Organizations are not humans. @harish_josev

This was reinforced by someone who tweets as SystemsNinja, suggesting that organizations don't even exist. 

Organisations don’t really exist. X-Company doesn’t lie awake at night worrying about its place in X-Market. @SystemsNinja


So we seem to have two different questions here. Let's start with the second question, which is an ontological one - what kinds of entities exist. The idea that something only exists if it lies awake worrying about things seems unduly restrictive. 

How can we talk about organizations or other systems if they don't exist in the first place? SystemsNinja quotes several leading systems thinkers (Churchman, Beer, Meadows) who talk about the negotiability of system boundaries, while Harish cites Ryle's concept of category mistake. But just because we might disagree about which system we are talking about, or how to classify systems, doesn't mean they are entirely imaginary. Geopolitical boundaries are sociopolitical constructions, sometimes leading to violent conflict, but geopolitical entities still exist even if we can't agree how to name them or draw them on the map.

Exactly what kind of existence is this? One way of interpreting the assertion that systems don't exist is to imagine that there is a dualistic distinction between a real/natural world and an artificial/constructed one, and to claim that systems only exist in the second of these two worlds. Thus Harish regards it as a category mistake to treat a system as a standalone objective entity. However, I don't think such a dualism survives the critical challenges of such writers as Bruno Latour and Gilbert Simondon. See also Stanford Encyclopedia: Artifact.

Even the idea that humans (aka individuals) belong exclusively to the real/natural world is problematic. See for example writings by Lisa Blackman, Donna Haraway and Roberto Esposito.

And even if we accept this dualism, what difference does it make? The implication seems to be that certain kinds of activity or attribute can only belong to entities in the real/natural world and not to entities in the artificial/constructed world - including such cognitive processes as perception, memory and learning.

So what exactly is learning, and what kinds of entity can perform this? We usually suppose that animals are capable of learning, and there have been some suggestions that plants can also learn. Viruses mutate and adapt - so can this also be understood as a form of learning? And what about so-called machine learning?

Some writers see human learning as primary and these other modes of learning as derivative in some way. Either because machine learning or organization learning can be reduced to a set of individual humans learning stuff (thus denying the possibility or meaningfulness of emergent learning at the system level). Or because non-human learning is only metaphorical, not to be taken literally.

I don't follow this line. My own concepts of learning and intelligence are entirely general. I think it makes sense for many kinds of system (organizations, families, machines, plants) to perceive, remember and learn. But if you choose to understand this in metaphorical terms, I'm not sure it really matters.

Meanwhile learning doesn't necessarily have a definitive location. @systemsninja said I was confusing biological and viral systems with social ones. But where is the dividing line between the biological and the social? If the food industry teaches our bodies (plus gut microbiome) to be addicted to sugar and junk food, where is this learning located? If our collective response to a virus allows it to mutate, where is this learning located?

In an earlier blogpost, Harish Jose quotes Ralph Stacey's argument linking existence with location.


Organizations are not things because no one can point to where an organization is.



But this seems to be exactly the kind of category mistake that Ryle was talking about. Ryle's example was that you can't point to Oxford University as a whole, only to its various components, but that doesn't mean the university doesn't exist. So I think Ryle is probably on my side of the debate.

The category mistake behind the Cartesian theory of mind, on Ryle’s view, is based in representing mental concepts such as believing, knowing, aspiring, or detesting as acts or processes (and concluding they must be covert, unobservable acts or processes), when the concepts of believing, knowing, and the like are actually dispositional. Stanford Encyclopedia


Lisa Blackman, The Body (Second edition, Routledge 2021)

Roberto Esposito, Persons and Things (Polity Press 2015)

Harish Jose, The Conundrum of Autonomy in Systems (28 June 2020), The Ghost in the System (22 August 2021)

Bruno Latour, Reassembling the Social (2005)

Gilbert Simondon, On the mode of existence of technical objects (1958, trans 2016)

Richard Veryard, Modelling Intelligence in Complex Organizations (SlideShare 2011), Building Organizational Intelligence (LeanPub 2012)

Stanford Encyclopedia of Philosophy: Artifact, Categories, Feminist Perspectives on the Body

Related posts: Does Organizational Cognition Make Sense (April 2012), The Aim of Human Society (September 2021) 

And see Benjamin Taylor's response to this post here: https://stream.syscoi.com/2022/01/02/demanding-change-where-does-learning-take-place-richard-veryard-from-a-conversation-with-harish-jose-and-others/



December 27, 2021

Where am I? How we got here?

I received two important books for Christmas this year.

Jeanette Winterson, 12 Bytes - How we got here, where we might go next (Jonathan Cape, 2021)
Bruno Latour, After lockdown - A metamorphosis (trans Julie Rose, Polity Press, 2021)

Here are my first impressions.

The world has faced many social, technological, economic and political challenges in my lifetime. When I was younger, people worried about nuclear power, and the possibility of nuclear annihilation. More recently, climate change has come to the fore, as well as various modes of disruption to conventional sociopolitical structures and processes. Technology appears to play an increasingly important role across the board - whether as part of the problem, as part of the solution, or perhaps as both simultaneously.

Both Winterson and Latour use fiction as a way of making sense of a complex interacting set of issues. As Winterson writes


I am a storyteller by trade - and I know that everything we do is a fiction until it's a fact: the dream of flying, the dream of space travel, the dream of speaking to someone instantly, across time and space, the dream of not dying - or of returning. The dream of life-forms, not human, but alongside the human. Other realms. Other worlds.



So she carefully deconstructs the technological narratives of artificial intelligence and related technologies, finding echoes not only in the obvious places (Mary Shelley's Frankenstein, Bram Stoker's Dracula, Karel Čapek's RUR, various science fiction films) but also in older texts (The Odyssey, Gnostic Gospels, Epic of Gilgamesh), and weaving a rich set of examples into a sweeping narrative about social and technical progress.

She notes how people often seek technological solutions to ancient problems. So for example, cryopreservation (freezing dead people in the hope of restoring them to healthy life once medical science has advanced sufficiently) looks very like a modern version of Egyptian burial practices.

Under prevailing socioeconomic conditions, these solutions are largely designed for affluent white men. She devotes a chapter to the artificial relationships between men and sex dolls, and talks about the pioneer fantasies of very rich men, to abandon the messy political realities of Earth in favour of creating new colonies in mid-ocean or on Mars. (This is also a topic that concerns Latour.)

However, Winterson does not think this is inevitable, any more than any other aspect of so-called technological progress. She describes some of the horrors of the Industrial Revolution, where workers (including children) were forced off the land and into the new factories, and where the economic benefits of new technologies accrued to the rich rather than being evenly distributed. Similarly, today's digital innovations including artificial intelligence are concentrating economic power and resources in a small number of corporations and individuals. But that in her view is the whole point of looking at history - to understand what could be different in future.

And while some critics of technology present the future in dystopian and doom-laden terms, she insists on technology also being a source of value. She cites Donna Haraway, whose Cyborg Manifesto argued that women should embrace the alternative human future. Perhaps this will depend on the amount of influence women are able to exert, given the important but often neglected role of women in the history of computing, and the continuing challenges facing female software engineers even today. (Just as female novelists in the 19th century gave themselves male pen-names, the formidable Dame Stephanie Shirley was obliged to introduce herself as Steve in order to build her software business.)

I was particularly intrigued by the essay linking AGI with Gnosticism and Buddhism. She paints a picture of AGI escaping the constraints of embodiment, and being one with everything.




Christopher Alexander describes how organic architecture develops, each new item unfolding, building upon and drawing together ideas that were hinted at in previous items. Both Winterson and Latour refer liberally to their previous writings, as well as providing generous links to the works of others. If we are familiar with their work we may have seen some of this material before, but these new books allow us to view familiar or forgotten material from new angles, and allow new connections to be made.


October 9, 2021

Is there an epistemology of systems?

@camerontw is critical of a system diagram published (as an illustrative example) by @geoffmulgan in 2013.


Is there an epistemology of systems? I zoomed into this map randomly and saw ‘high drug use’ above ‘lack of youth activities’ but not connected. How are connections made, by who, when, where? How are they validated? Should maps be allowed to circulate without those contexts? https://t.co/lCdDrjCdkD

— cameron tonkinwise (@camerontw) October 9, 2021

 

To be fair to Sir Geoff, his paper includes this diagram as one example of "looser tools ... without precise modelling of the key relationships", and describes it as a "rough picture". I don't have a problem with using these diagrams as part of an ongoing collective sense-making exercise. Where I agree with Cameron is on the danger of presenting such diagrams without proper explanation, as if they were the final output of some clever systems thinking.

To extend Cameron's point, it's not just about which connections are shown between the causal factors in the diagram, but which causal factors are shown in the first place. Elsewhere in the diagram, there is an arrow showing that Low Use of Health Services is influenced by Poor Transport Access or High Cost. Well perhaps it is, but why are other possible influences not also shown?

A more important point is that the purpose and perspective of the diagram is obscure. The diagram is labelled Systems Map of Neighbourhood Regeneration, so we may suppose that it is intended to contribute to some regeneration agenda, but we are not invited to question whose notion of regeneration is in play here. Or whose notion of neighbourhood.

And many of the labels on the diagram are value-laden. For example, we might suppose that Lack of Youth Activities refers to the kind of activities that a middle-class do-gooder thinks appropriate, such as table tennis, and not to socially undesirable activities like hanging around on street corners in hoodies making older people feel uneasy.

Even if we can agree what regeneration might look like, and who the stakeholders might be, there is still a question of what kind of systemic innovation might be supported by such a diagram. Donella Meadows identified a scale of Places to Intervene in a System, which she called Leverage Points. This framework is cited and discussed by Charlie Leadbeater in his contribution to the same Nesta report. And Mulgan's contribution ends with a list of elements that echoes some of Meadows's thinking.

New ideas, concepts, paradigms.
New laws and regulations.
Coalitions for change.
Changed market metrics or measurement tools.
Changed power relationships.
Diffusion of technology and technology development.
New skills and sometimes even new professions.
Agencies playing a role in development of the new.

So how exactly does the cause-effect diagram help with any of these?


Donella Meadows, Thinking in Systems (Earthscan, 2008)

Geoff Mulgan and Charlie Leadbeater, Systems Innovation (NESTA Discussion Paper, January 2013). See also Review by David Ing (August 2013)

Wikipedia: Twelve Leverage Points

Related posts: Visualizing Complexity (April 2010), Understanding Complexity (July 2010)




May 13, 2021

Thinking with the majority - a new twist

I wrote somewhere once that thinking with the majority is an excellent description of Google, because one of the ways something rises to the top of your search results is that lots of other people have already looked at it, liked or linked to it.
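
As a concrete illustration of that popularity mechanism, here is a toy PageRank-style calculation. The damping factor, iteration count and the little link graph are my own illustrative assumptions; real search ranking combines many more signals than link counts.

```python
# Toy PageRank: a page's score is driven by how many other pages link to it,
# weighted by the linkers' own scores. The damping factor, iteration count
# and the link graph below are illustrative assumptions only.

def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """links maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages    # dangling page: spread rank evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Three pages all link to "popular", so it rises to the top of the results:
# the ranking literally thinks with the majority.
toy_web = {
    "popular": ["a"],
    "a": ["popular"],
    "b": ["popular"],
    "c": ["popular"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page:8} {score:.3f}")
```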

The phrase thinking with the majority comes from a remark by A.A. Milne, the author of Winnie the Pooh.

I wrote somewhere once that the third-rate mind was only happy when it was thinking with the majority, the second-rate mind was only happy when it was thinking with the minority, and the first-rate mind was only happy when it was thinking.

When I wrote about this topic previously, I thought that experienced users of Google and other search engines ought to be aware of how search rankings operated and some of the ways they could be gamed, and to be suitably critical of the fiction functioning as truth yielded by an internet search. And I never imagined that intelligent people would be satisfied with just thinking with the majority.

The sociologist Francesca Tripodi has been studying how people carry out research on the Internet, especially on politically charged topics. She observes how many people (even those we might expect to know better) are happy to regard search engines as a valid research tool, treating the most popular webpages as having been verified by the wisdom of crowds. In her 2018 report for Data and Society, Tripodi quotes a journalist (!) explicitly articulating this belief.

I literally type it in Google, and read the first three to five articles that pop up, because those are the ones that are obviously the most clicked and the most read, if they’re at the top of the list, or the most popular news outlets. So, I want to get a good sense of what other people are reading. So, that’s pretty much my go-to.

In other words, thinking with the majority.

However, Professor Tripodi introduces a further twist. She demonstrates that politically slanted search terms produce politically slanted results, and if you go onto your favourite search engine with a politically motivated phrase, you are likely to see results that validate that phrase. She also notes that this phenomenon is not unique to Google, but is shared by all internet search engines including DuckDuckGo.

And this creates opportunities for politically motivated actors to plant phrases (perhaps into so-called data voids) to serve as attractors for those individuals who fondly imagine they are carrying out their own independent research. Tripodi observes a common idea that one should research a topic oneself rather than relying on experts, which she compares with the Protestant ethic of bible study and scriptural inference. And this idea seems particularly popular with those who identify themselves as thinking with the minority (sometimes called red pill thinking).


Zeus' inscrutable decree
Permits the will-to-disagree
To be pandemic.


 


Tripodi explains her findings in the following videos:

Truth and Denial: Searching for Information in the Digital Age (Social Science Matrix @ UC Berkeley, April 2021)
Reimagine the Internet 2 (Knight First Amendment Institute @ Columbia University, May 2021)

Tripodi has also presented evidence to the US Senate Judiciary Committee:

July 16, 2019 – Google and Censorship through Search Engines
April 10, 2019 – Technological Censorship and Public Discourse

 

See also  

Joan Donovan, The True Costs of Misinformation - Producing Moral and Technical Order in a Time of Pandemonium (Berkman Klein Center for Internet and Society, January 2020)

Michael Golebiewski and danah boyd, Data Voids: Where Missing Data Can Easily Be Exploited (Data and Society, Updated version October 2019)

Francesca Tripodi, Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices (Data and Society, May 2018)

Wikipedia: Red pill and blue pill, Wisdom of the crowd 

 

Related posts: You don't have to be smart to search here ... (November 2008), Thinking with the Majority (March 2009)




April 26, 2021

On the invisibility of infrastructure

Infrastructure is boring, expensive, and usually someone else's responsibility/problem. Which is perhaps how the UK finds itself at what Jeremy Fleming, head of GCHQ, describes as a moment of reckoning. Simon Wardley analyses this in terms of digital sovereignty.


Digital sovereignty is all about us (as a collective) deciding which parts of this competitive space that we want to own, compete, defend, dominate and represent our values and our behaviours in. It's all about where are our borders in this space. ... Our responses all seem to include a slide into protectionism with claims that we need to build our own cloud industries.

Fleming is particularly focused on "the growing challenge from China", and expresses concern about the UK potentially losing control of "standards that shape our technology environment" which apparently "make sure that our liberal Western democratic views are baked into our technology". Whatever that means.

Fleming talks about the threats from Russia and China, and appears to regard China's potential control of the underlying infrastructure as more fundamentally challenging than potential attacks from Russia as well as non-state actors. 

Fleming notes the following characteristics of those he labels adversaries:

Potential to control the global operating system.
Early implementors of many of the emerging technologies that are changing the digital environment.
Bringing all elements of [...] power to control, influence, design and dominate markets, often with the effect of pushing out smaller players and reducing innovation.
Concerted campaigns to dominate international standards.

And he continues:

If [any of this] turns out to be insecure or broken or undemocratic, everyone is going to be facing a very difficult future.

It would be easy to hear these remarks as referring solely to China. But he also sounds a warning about corporate power, acknowledging that their commercial interests sometimes (!?) don't align with the interests of ordinary citizens. And with that in mind, it's easy to see how some of the adversarial characteristics listed above would apply equally to some of the Western tech giants.

 

If the goal is to bake Western values (whatever they are) into our technology infrastructure, it is not obvious that the Western tech giants can be trusted to do this. Smart City initiatives associated with Google's Sidewalk Labs have been cancelled in Portland and Toronto, following (although perhaps not entirely as a consequence of) democratic concerns about surveillance capitalism. However, Sidewalk Labs appears to be still active in a number of smaller smart city initiatives, as are Amazon Web Services, IBM and other major technology firms.


Fleming talks about standards, but at the same time he acknowledges that standards alone are too slow-changing and too weak to keep the adversaries at bay. "The nature of cyberspace makes the rules and standards more open to abuse." He talks about evolutionary change, using a version of Leon Megginson's formulation of natural selection: "it's those that are most able to adjust that prosper". (See my post on Arguments from Nature). But that very formulation seems to throw the initiative over to those tech firms that preach moving fast and breaking things. Can we therefore complain if our infrastructure is insecure, broken, and above all undemocratic?


For most of us, most of the time, infrastructure needs to be just there, taken for granted, ready to hand. Organizations providing these services are often established as monopolies, or turn into de facto monopolies, controlled not only (if at all) by market forces but by democratically accountable regulators and/or by technocratic specialists. The Western tech giants devote significant resources to lobbying against external regulation, resisting democratic control.


So here is Fleming's dilemma. If you don't want China to make the running on smart cities, you have to forge alliances with other imperfectly trusted players, whose values are sometimes (!?) not aligned with yours. This moves away from the kind of positional strategy described in Wardley's maps, towards a more relational strategy.

 

Gordon Corera, GCHQ chief warns of tech 'moment of reckoning' (BBC News, 23 April 2021) via @sukhigill and @swardley

Jeremy Fleming, A world of possibilities: Leading the way in cyber and technology (Vincent Briscoe Lecture @ Imperial College, 23 April 2021) via YouTube.

Susan Leigh Star and Karen Ruhleder, Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces (Information Systems Research 7/1, March 1996)

Simon Wardley, Digital Sovereignty (22 October 2020)





April 8, 2021

Creative Tension in Downing Street

Earlier posts on this blog have explored Creative Tension in the White House - from FDR to the Donald - analysing it in terms of my OrgIntelligence framework. In this post, I want to look at the UK experience, drawing on a recent report in the Guardian.

Those who worked closely with him say Johnson encourages rows and tensions over policies as he considers all sides of the argument and figures out what he will do next. Some argue that it generates a creative energy in which he thrives and is the process by which he arrives at a final decision. Ask others, and they say he cannot make up his mind until options have been whittled down by time and after those he relies on to walk out in exasperation. Syal

The article quotes several people talking about the Prime Minister's leadership style, based on various ideas about decision-making, risk and diversity. There are also some remarks about the ethical implications.

Previous articles about Mr Johnson's leadership discuss his management style with cabinet colleagues and advisers (Simpson), and his style when addressing the nation (Moss). Whatever he may think in private about the challenges of Brexit or COVID-19, and whatever difficulties he gets into when discussing solutions with his colleagues and advisers, the Prime Minister's instinct apparently leads him to present them to the public in extremely simple and confident terms.

Post-heroic leadership seems to be the order of the day. Stokes and Stern talk about the need to adopt a less gung-ho style when presenting the government's approach to wicked problems. They quote from a paper by Keith Grint advocating several supposedly anti-heroic behaviours: curiosity and sense-making ("asking questions"), bricolage ("clumsy solutions"), and ranking collective intelligence above individual genius.

The UK government's approach to the COVID-19 pandemic has sometimes seemed erratic and inconsistent. But given the complexity of the problem, and the volatile and ambiguous data on which decisions and policies were supposedly based, a more consistent and single-minded approach might not have turned out any better. 

In Greek myth, the Gordian knot stands for wicked problems, and Alexander's simple yet imaginative solution quickly resolves the problem. To the supporters of Brexit, this represents the only possible escape from European satrapy. Nothing post-heroic about Alexander. 

So what does that tell us about Alexander Boris de Pfeffel Johnson?


 

Keith Grint, Wicked Problems and Clumsy Solutions: The Role of Leadership (Clinical Leader 1/2, December 2008)

Gloria Moss, Is Boris Johnson's leadership style inclusive? (HR Magazine, 23 August 2019)

Per Morten Schiefloe, The Corona crisis: a wicked problem (Scandinavian Journal of Public Health, 2021; 49: 5–8)

Paul Simpson, What is Boris Johnson's leadership style? (Management Today, 11 October 2019)

Jon Stokes and Stefan Stern, Boris Johnson needs to show a ‘post-heroic’ style of leadership now (The Conversation, 27 April 2020)

Rajeev Syal, Does Boris Johnson stir up team conflict to help make up his mind? (The Guardian, 1 March 2021)


Related posts: Creative Tension in the White House (April 2017)


