‘The Machine’
Machines order information to accelerate or slow, and to compress or extend, a particular aspect of space and time. Whether they are ‘husbandry’, ‘mechanical’, ‘electro-mechanical’ or ‘neurological’, when machines are choreographed in sequence they tend to produce enough information to generate an output, and we then employ tools to make those outputs more familiar to us. These two aspects of aggregation form the “real-estate” that grants us access to phenomena which conspire out of space and time.
Performative media are nodes in a neurological machine that contribute to its ‘fabric’: a materiality “woven” from partial fragments into a recognisable whole. They are important operatives that do more than traditional media, which simply inform and describe: they accomplish an ‘act’ through the process of their design configuration, or to put it another way, they order information to conduct a performance through their arrangement of components and parts by way of a monad. I suspect that when artifacts interrelate and transform the interests of an array of actor-networks and agents through performative media, the result may need to be assessed and audited through the expression of a monad, so that we can reliably observe it as having made something happen - (though I’m not altogether certain of this statement).
The monad is associated with the 17th-century philosopher and mathematician Gottfried Wilhelm Leibniz, who described the ‘monad’ as being like an “atom of nature” that contains within itself a reflection of the whole world. Outside such philosophical terms, as when it applies to category theory and functional programming, a monad is built on a ‘functor’ equipped with extra structure: a way of lifting a value into a context and a way of chaining operations within that context, so that parts can be ordered to correspond with components as a function of their compositional imperative. I also think that the monad might be to performative media what a sentence is to speech. Where sentences are parsed in combination with grammar and propositional logic - the type of role a forensic-storyteller would perform - performative media is parsed in part by category theory and possibly set theory, and would signify the type of role performed by a forensic-wrangler - (but I’m not certain of this statement yet either).
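Stripped of the philosophy, the programmer’s version of the idea can be sketched in a few lines of code. This is only an illustrative toy (the familiar ‘Maybe’ pattern, with invented example values), not a claim about how performative media must be implemented: a monad wraps a value in a context and supplies a lawful way to chain operations on it.

```python
# A toy 'Maybe' monad: a context holding either a value or nothing,
# with 'unit' to enter the context and 'bind' to chain steps inside it.
class Maybe:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def unit(value):
        # Lift a plain value into the monadic context.
        return Maybe(value)

    def bind(self, func):
        # Chain a computation; short-circuit once the value is absent.
        if self.value is None:
            return self
        return func(self.value)

def half(n):
    # A partial step: only even numbers halve cleanly.
    return Maybe(n // 2) if n % 2 == 0 else Maybe(None)

ok = Maybe.unit(12).bind(half).bind(half)   # 12 -> 6 -> 3
bad = ok.bind(half)                         # 3 is odd: the chain yields nothing
print(ok.value, bad.value)                  # 3 None
```

The point of the analogy is the ordering: each `bind` arranges the next ‘component’ in sequence, so the whole chain either performs, or visibly fails to.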
Monads, like sentences, capture a state-of-being, of-affairs and of-facts. Though there are probably an infinite number of sentence combinations that describe the human condition and phenomena more broadly, there may only be a finite number of monads which capture human states of being. There could, for instance, be 8 to 80 thousand individual and identifiable human states; 2 to 20 thousand that we may be capable of recording for a dog or cat; and despite an estimated 30,000 apple varieties, they might only express 150 to 200 discrete states, with fewer than 20 for single-celled organisms - but I’m still just guessing. Instrumental in obtaining the texture of a neurological fabric, I think, will be validation of the state that a monad signifies: an effort to have the most applicable and appropriate performative media in the right place, in the correct order, approximating the most accurate state-of-being and meaning at the right time.
In schools children are taught and supported in how to structure a sentence to write an essay. In the not too distant future I think students will be supported in rehearsing how to compose monads in order to choreograph performative media for ‘the machine’. Every drive for optimisation signals the end of one societal epoch and the beginning of another, and the drive to improve literacy in the working classes from the 1840s to about 1993 in the UK was in part a response to workers needing to interface with increasingly complex instruction-sets of machinery and materials. I think what’s happened since the mid 1990s is that the migration away from mainly mechanical machines to progressively neurological versions of systems and “the machine” has caused gaps to appear in the fabric of ‘the machine’.
There are several reasons why events have unfolded in this way, but in employment terms specifically these ‘gaps’ represent spaces where knowledge would historically have been filled by a parent passing skills to their child, or by a craft-person transferring techniques to an apprentice. In these gaps, created by so many under-developed neurological nodes - our children - programmers write code for machines to fill knowledge gaps in the fabric. In other words, programmers act like parents to the machines they code, and 21st Century machines are indeed corporate children. The principal difference today, in terms of education and work, is the relationship components and parts have to production. In the 19th Century, factory owners employed people to work alongside steam-powered machines. 20th Century factories hired labour for jobs using electronic machines, and skills transitioned from one century to the next. Aside from the role of ‘operative’ tasked with synchronising parts of the machine, for this brief moment in history 21st Century children’s historical proximity to machines as a means of production has been largely suspended and replaced.
quirkyposture
"...for those of us curious about performative media and in building a neurological fabric using forensic stories"
Friday, November 19, 2021
Tuesday, November 03, 2020
‘Programming-for-the-rest-of-Us’
Does trans-disciplinarity mean that each of us will need to study for at least 4 separate degrees? One to learn our craft of choice; another for communication and digital media skills; a third for business studies and entrepreneurship; and a fourth for computer science with programming practice at its core? Possibly it does, but that would suggest at least a £200,000 personal investment at current prices. Realistically, and instead of basing future learning on the legacy of medieval guilds, level 5 and 6 education may need to adopt a more collegiate approach, such as the kind applied at ‘The London Interdisciplinary School’ in Whitechapel (https://www.londoninterdisciplinaryschool.org/). You see, I think machines designed for a single use-case will be the specialists of tomorrow. They will perform at speeds and with a fidelity impossible for a human. What this means is that advancement in artificial intelligence and machine learning will likely set the “ceiling” for the quantity of forensic wranglers industry will be willing to pay for, and decide the value and therefore the price of a Ph.D by the end of the 2020s, in exactly the same way that capital and plant decided the value and price of labour at the start of the 1820s.
‘Digital’ has been great at championing itself and has made itself easy to notice. So let’s focus on the type of ‘dish’ we may want to make, because this aspect is generally given far less attention. It takes decades for humans to diffuse the benefits of a new technology throughout its social, economic and political ecology, and it’s considerably harder to imagine and implement the hundreds, if not thousands, of small complementary innovations at the level necessary for humans to make changes to factories, offices and society. Quirkyposture’s ambition is for 10% of the world’s population (837.5 million) to know how to “programme” by 2099. Now, you’ll be forgiven for asking why such a low estimate. Roughly speaking, it’s taken our species about 10,000 years for 86.3% of us on average to be able to read and write to other biological machines the same as ourselves and, at the time of writing this blog entry, 60 years has passed for 0.4% of us to acquire the skill to read and write to non-biological machines, which of course we commonly call computers. No doubt by 2099 there will sadly still be people who cannot read and write, but the rest of us won’t stop learning to communicate in new ways by then; it’s just that Quirkyposture is all about people developing 21st, not necessarily 22nd, Century skills. One of the most inspiring examples of performative media, and one for our hairdresser mentioned in a previous post to take a look at, is a TED talk delivered by designer and architect Neri Oxman: (https://www.ted.com/talks/neri_oxman_design_at_the_intersection_of_technology_and_biology)... do please check it out!
Complementary to Neri Oxman and her team’s work is the Herculean effort made by the Wellcome Trust Sanger Institute’s ‘Darwin Tree of Life’ Project. Since 2019, they and 9 other institutions across the UK have been active in sequencing the genomes of 2,000 species, with plans announced in 2022 to further sequence the genomes of an estimated 70,000 species of plants, animals and fungi found in Britain and Ireland by 2030: (https://innovationstories.sanger.ac.uk/completing-the-puzzle-of-life-on-earth). This visionary initiative will be Britain’s contribution to the Earth BioGenome Project (EBP), the global effort to sequence the genomes of all species on our planet. New Growth Theory divides analysis into two distinct categories: ‘instructions’ and ‘materials’. Instructions represent “ideas” and materials become “things”. Materials can be thought of as goods with mass, like pots and pans, or as mediums, such as electricity. Genetics stores information for both instructions and material states in a single substance, which potentially makes it an incredibly efficient fabricant that we are only at the beginning of exploring. As legacy industries have looked to reduce cost by setting up in developing economies, post-industrial nations are tasked with finding new products and services derived from new economic analysis. Proponents of ‘New Growth Theory’ argue that the transformation from a traditional Industrial to a post-industrial Information age implies a seismic change to the rules of business, which I will repeatedly emphasise in this blog as essentially neurological.
Performative media are “make happen” agents and contain aspects that resemble what smartphone apps generally do, but the subject of my book will be an exploration of “applification” in mediums pertaining, in the main, to physical things. Forensic stories are “make aware” content: what they do is reveal the relationships between the different mediums that conspire to build a performative media.
Tuesday, January 24, 2012
Cooking for yourself a Digital Dish
I felt compelled to post this for the 1.2m young people in the UK currently trying to find work.
I spent 6 years indirectly researching, in one form or another, how populations in developed countries were going to evolve and compete in a global economy. I arrived at the following 6-minute statement:
Question:
How do you convince an apple grower that apple pie would be good for their business, if farming doesn't yet exist, nor indeed any knowledge of what a pie is?
Answer:
Make one
Question:
How do you convince a strawberry picker that jam could be great for their business if they've no idea what jam is and, added to that, glass hasn't yet been invented?
Better Answer:
Give them the tools to play and the technology to experiment with.
To draw an analogy, digital is now at a similar stage to when our ancestors first discovered fire. Before that moment, (fermentation aside) all food was consumed raw. As the idea of passing heat through food took hold, our ancestors would have experimented with different food and discovered what food types worked well and which badly. Raw food eaters would have cited examples of blackened inedible food as proof that fire and raw food should never be combined. Pickers may have protested that fire was useless because it destroyed the delicious fruit they collected.
Gradually however, and despite numerous disasters, communities of cooked-food eaters would have fared better than picker communities and peddlers of raw food. This combination of play (experimentation) and technology (fire) enabled our ancestors to explore a significant and important part of their world with granular fidelity; resulting in communities a third more energised, evolving over centuries and extending their repertoire to fire pots, blow glass, forge steel and generate steam power.
“Digital is the catalyst for disruptive change now, as fire was for our ancestors then! Fire enabled granular-level manipulation of the physical world and exacted a profound change on human existence; the contribution digital will make to the human journey is equally seismic, the effects of which will be equally profound and predicted to last as long.”
It’s here that we now find ourselves. Digital technology is comparable to the fire used by our ancestors to cook food. It is the ingredient that, when added to land, labour, capital, people, ideas and things, is required to meet unmet demand and create new markets, new products and new jobs.
… a few economic tenets
The underlying governance for businesses in developed countries is driven by a persistent need to reduce costs, increase productivity and minimise risk; it’s not governed by a job-creation criterion. Jobs are a by-product of increased productivity, as this reduces the price of goods, leaving more money in the pockets of you and me, which we are then encouraged to spend on more goods, thus creating more jobs. Regrettably these new jobs are only created if they promise increased efficiency.
These are the same forces which dictate that new products will only be created if there is unmet demand in the market and if they promise to increase value for the producing firm. Profit margins on new products are often so small that profit can sometimes only be realised if the cost of workers is kept low. This means labour-intensive industries will seek out new ways to keep the cost of employing people as low as possible, including automating production altogether.
However, for many companies this often means creating jobs abroad in developing countries, as further job creation in this country is no longer efficient for growth. What this amounts to is very little or no room for new entrants, at least not until a major firm in the market goes bust. It’s a bit like the comical story of working 50 years for a firm with little or no chance of promotion, waiting until 104-year-old "Mr Worthing" either retires or dies.
… why industry is as it is
The majority of established UK businesses are based on 300-year-old industries. Through decades of invention, innovation and reorganisation they’ve reached the 21st Century as efficient as they can be. They are rooted in physical laws and limitations; primarily based on a process of converting resources of relative abundance into naturally scarce, and sometimes artificially made scarce, goods. I’m going to be bold and suggest that November 2008 was the date when we witnessed the end point of 300 years of how post-industrialised countries have conducted business in a singularly physical world. The transition from a physical to a predominantly neurological economy began in the 1970s, but I think 2004 to 2008 was the final tipping point.
Established industries that evolved pre-digital carry a "legacy", derived from years of hard-fought relationships combined with complex systems, cemented in the acculturation of familiar repertoires and known outcomes. Their “systems” are costly to sustain and are often deliberately expensive in order to minimise competition from new entrants, behind high barriers-to-entry defined by an elite minority. Similar by degree to raw-food communities, legacy industry business models require that “food be consumed raw” to survive at their current size and format! Digital threatens to break how they make money; restructure their “form-factor”, or in other words alter the size, scope and reach of individual firms; alter who or what represents an “elite”; and is set to significantly change several other business assumptions and practices.
In the past, individual hunters may have hand-picked the best raw food sources and been held in high regard for their knowledge and courage - but enter livestock farmers. Pre-fire, specialists in raw-food preparation were critical. Post-fire, previously inedible items could be safely eaten but, more importantly, communities were not entirely dependent on a minority of elite hunters for their survival. Neither could raw-food preparation specialists easily argue the importance of their role, or credit themselves as the outright point of access to the final product.
Legacy industries have been great hunters; how do we persuade them to become great farmers? Isn’t it about having the confidence to "grow" more value than any one company or individual can capture or convert into a sale: maximising the area from which to harvest, even if by doing so you directly benefit your competition? It’s a subject certainly worthy of another blog post.
At best, legacy industries don’t know how to grow a neurological economy, or are reluctant to. At worst they will pursue change through legislation to try and prevent the adoption of “cooked food”, and will not stimulate change that threatens to break the methods they’ve adopted to make money, or that positions others - the wood chopper, fire-starter, potter, cook, musician, storyteller or dancer - between them and the point of consumption, without directly receiving revenue from this new economy.
Pickers will not instigate change because they are confident that fruit tastes just great as it is; but without the discovery of fire, experimentation, play and subsequent cooked food, apple pie and strawberry tarts would never taste as good.
Just as light from a flame can be used to illuminate raw food but not cook it, evidence suggests that legacy economies are using digital to reveal their products and services in new areas previously too niche, too risky and too “dimly lit” to justify expenditure before now. Offering the smell of cooked food promises something delicious, but this deception may also be used simply to mask a serving of the same uncooked offering. It seems then that digital outcomes for the 21st Century require thinking throughout the entire organisation and as a nation.
… starting from today, a version of “work” for many of us
Individuals and communities independently exploring digital technologies represent, for legacy businesses, a relatively low-involvement way to experiment, explore and play with digital, at minimal risk and cost to them. It’s then up to those individuals and communities either to package and sell the results of their exploration, experimentation and play indirectly to those legacy industries, or to commercially exploit their discoveries directly themselves.
Both semi-skilled and professional white-collar workers face sections of their jobs being outsourced, initially to parts of the world where labour costs are cheaper - call centres being the most obvious example. Secondly, and some may argue of greater concern, as UK firms pursue greater efficiencies, automation from machine-to-machine exchanges will reduce employment even further: think automated supermarket check-outs, automated shelf-stacking, unmanned train station ticket offices, driverless trains, voice-recognition-enabled word-processing, airport baggage handling, etc. Considering that this list is expected to accelerate over the coming years, possibly outpacing our ability to compete directly, suggests we recognise the aspects of production that humans excel at - an idea better represented in figures 1 and 2.
The Luddite Fallacy argues that “machines will make human labour obsolete”. Historically however this has not been the case. Technology has enabled fewer people to become more productive and cost-effective, thus growing the overall economy and creating new and unique jobs for humans in other industries. But what happens when machines become the "worker"? I wonder how much 18th Century industrial Britain would have resembled that described in history books if technology like the Spinning Jenny or Watt’s steam engine had, within fifty years, become as smart as the “job(s)” required but still continued to improve; whilst getting cheaper every two years; with a quantifiable increase in speed every two years and a perceptible increase in quality every four years; possibly halving capital investment every five years; invoking a disruptive technological redesign on an eight-yearly cycle; altering the business models and methods of making money for existing producers; prompting marginal costs to fall closer to zero after every cycle; summoning a ten-times increase in the number of people who could now afford access to these new methods of production every ten years… without considering the impact these factors would have had on when capitalisation occurred; how that would have altered investor attitudes to what constituted physical capital and where money should be invested; indeed, questioning the very notion of industrialisation, production intensification and if or when it should happen at all… raising further questions of how companies scaled and increased quantity, and what impact that would have on the price of goods. Finally, all of this dynamism equates to the resultant effect on how, when and where human labour and skills added value… “phew!”
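It is easy to put rough numbers to that thought experiment. The rates below are the scenario’s own hypotheticals, not historical data, but the compounding alone shows why the outcome would have been so different:

```python
# Back-of-envelope compounding for the hypothetical 50-year scenario above.
years = 50

# "possibly halving capital investment every five years" -> ten halvings
capital_fraction = 0.5 ** (years / 5)

# "a ten times increase in the number of people who could now afford
# access ... every ten years" -> five tenfold jumps
access_multiplier = 10 ** (years / 10)

print(f"capital needed: {capital_fraction:.4%} of the original")   # ~0.0977%
print(f"people with access: {access_multiplier:,.0f}x as many")    # 100,000x
```

Under those assumed rates, fifty years leaves the capital requirement at roughly a thousandth of its starting level while multiplying access a hundred-thousand-fold, which is the heart of the "when should industrialisation happen at all?" question.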
… we all have to start from where we are
Part of the solution is that skilled workers have to know how to create their own little bit of “fire” and learn how to “cook their own food”. It means a future in which just knowing a craft will not be enough. Traditionally this has been the possession of the working class, conducting their daily tasks through direct instruction, exchanging tangible, often manual skills for paid employment. It seems the working class now live in places like China, India, Vietnam and Brazil, doesn’t it? So-called “semi-skilled” and “white collar” workers don’t belong to the working class anymore; they are now part of the “Creative Class”: workers, often university graduates, who will need to create work for themselves.
It will require that we extend this list to include vocationally skilled workers, investing time to organise themselves and build heterarchical, not hierarchical, connections and professional networks; developing entrepreneurship; and lastly knowing how to code. For not knowing how to programme in the 21st Century will be like not knowing how to read and write in the 20th. Just as our ancestors produced cave paintings, then speech and later writing, programming is the next evolution in the human journey, expressing our compulsion to exchange ideas and communicate, enabled by cheap, easily accessible and readily available tools. Knowing how to programme, however, is more profound than just another chapter in the history of human communication.
The act of programming is actually about accessing a raw resource and manipulating that resource to produce something with value added, comparable to how a furniture maker manipulates wood to make a chair or a seamstress manipulates cotton to manufacture a dress. A programming language is imbued with the fundamental properties required to make derivative goods, and programming is the discipline necessary to manipulate those properties... "Programming is the refinery for data - the new crude oil of all nations!"
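As a small, entirely invented illustration of that refinery metaphor (the readings and the threshold are made up purely for the example), here is raw data going in and a value-added summary coming out:

```python
# A toy "data refinery": raw readings in, a small value-added summary out.
raw_readings = [36.4, 36.6, 36.5, 38.2, 38.9, 36.7]  # e.g. temperatures

def refine(readings, threshold=38.0):
    """Turn raw numbers into something more useful than the numbers alone."""
    average = sum(readings) / len(readings)
    spikes = [r for r in readings if r > threshold]
    return {
        "average": round(average, 2),
        "spike_count": len(spikes),
        "alert": bool(spikes),
    }

summary = refine(raw_readings)
print(summary)  # the "derivative good" made from the raw resource
```

The value is not in the six numbers but in what the refinement says about them: an average, a count of anomalies, and a decision-ready alert.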
Departing from our fire analogy for just a moment longer, and remaining in the physical world of tangible goods: if an equivalent to programming existed in the physical world, then with enough skill our furniture maker would be able to manufacture the timber frame of a house, sculpt a wooden statue or build a boat, not to mention other types of wood derivative, with the same efficacy as any producer with a capital investment dedicated to any of the markets specific to this list. Unburdened by physical limits such as a boat yard, precision tools, or timber availability and subsequent price fluctuations, our furniture maker, already well rehearsed in the code to make wooden goods, could if he so wished learn the programming language for cotton, ceramic or metals relatively quickly.
… it’s all about weaving together a physical and neurological “fabric”
What will make programming so remarkable is when humans learn to weave the neurological world - consisting of sensors, actuators, photonics, internet protocols, and virtual products currently experienced in web space as apps, games, video and documents - directly into the fabric of the physical world. Not everyone will be able, inclined or competent enough to programme to the same standard as a Google software engineer, just as I’m not convinced that everyone has the need or innate ability to write as well as any leading author of novels or poetry, all of the time or even some of the time.
That’s not to say that a hairdresser in Bolton shouldn’t strive to learn how to "programme", despite maybe never possessing the aptitude to programme professionally, in the same way that not everyone who learns to read and write needs to become a professional writer. I'm convinced that our reliance on 300-year-old industries cannot continue, although I humbly admit that these are also the same 300-year-old industries who own the vast majority of the intellectual property rights required for new growth. I am also convinced that for growth in developed nations to occur, there needs to be an end to the tendency towards industry separateness, combined with intellectual property held by a few being made available to many, considering that a great idea, that magnificent thing, could come from just about anywhere.
Indeed, might not a hairdresser from Bolton who programmes, albeit not as well as a Microsoft software engineer, code a pair of scissors that not only cuts hair beautifully but produces analyses of her customer’s mineral deficiencies and cholesterol levels - and flags that the customer, who incidentally may have just popped in for a perm, is spiking data that she’s pregnant… with twins!!
Despite the possibilities for programmers from non-traditional sources participating in a neurological world, learning how to programme in its current form is not easy and often feels like one is rubbing two sticks together for an awfully long time before things begin to smoke let alone ignite. So if you agree with the proposition in this post, the big challenge facing developed nations has to be how to teach programming and make it relevant to the majority.
It’s not because UK industry needs it; free-market economics will seek out efficiencies regardless - whether here or overseas, through people, machines or total automation. No, we have to find a way for actors, archaeologists, historians, sales assistants, writers, artists, beauty therapists, journalists, musicians, hairdressers and builders to play a part in a future dominated by digital, and help individuals become producers, not just digital consumers, serving up their own example of a digital dish.
I felt compelled to post this for the 1.2m young people in the UK currently trying to find work.
I spent 6 years indirectly researching in one form or another, how populations in developed countries, were going to evolve and compete in a global economy. I arrived at the following 6 minute statement:
Question:
How do you convince an apple grower that apple pie would be good for their business if farming does not yet exist, nor indeed any knowledge of what a pie is?
Answer:
Make one
Question:
How do you convince a strawberry picker that jam could be great for their business if they've no idea what jam is and, on top of that, glass hasn't yet been invented?
Better Answer:
Give them the tools to play and the technology to experiment with.
To draw an analogy, digital is now at a similar stage to when our ancestors first discovered fire. Before that moment, (fermentation aside) all food was consumed raw. As the idea of passing heat through food took hold, our ancestors would have experimented with different food and discovered what food types worked well and which badly. Raw food eaters would have cited examples of blackened inedible food as proof that fire and raw food should never be combined. Pickers may have protested that fire was useless because it destroyed the delicious fruit they collected.
Gradually, however, and despite numerous disasters, communities of cooked-food eaters would have fared better than picker communities and peddlers of raw food. This combination of play (experimentation) and technology (fire) enabled our ancestors to explore a significant and important part of their world with granular fidelity, resulting in communities a third better energised, evolving over centuries and extending their repertoire to fire pots, blow glass, forge steel and generate steam power.
“Digital is the catalyst for disruptive change now, as fire was for our ancestors then! Fire enabled granular-level manipulation of the physical world and exacted a profound change on human existence. The contribution digital will make to the human journey is equally seismic, its effects equally profound and predicted to last as long.”
It’s here that we now find ourselves. Digital technology is comparable to the fire used by our ancestors to cook food. It is the ingredient which, when added to land, labour, capital, people, ideas and things, is required to meet unmet demand, create new markets, new products and new jobs.
… a few economic tenets
The underlying governance of businesses in developed countries is driven by a persistent need to reduce costs, increase productivity and minimise risk; it is not governed by a job-creation criterion. Jobs are a by-product of increased productivity, as productivity reduces the price of goods, leaving more money in the pockets of you and me, which we are then encouraged to spend on more goods, thus creating more jobs. Regrettably, these new jobs are only created if they promise increased efficiency.
These are the same forces which dictate that new products will only be created if there is unmet demand in the market and if they promise to increase value for the producing firm. Often profit margins on new products are so small that profit can only be realised if the cost of workers is kept low. This means labour-intensive industries will seek out new ways to keep the cost of employing people as low as possible, including automating production altogether.
However, for many companies this often means creating jobs abroad in developing countries, as further job creation in this country is no longer efficient for growth. What this amounts to is very little or no room for new entrants, at least not until a major firm in the market goes bust. It’s a bit like the comical story of working 50 years for a firm with little or no chance of promotion, waiting until 104-year-old "Mr Worthing" either retires or dies.
… why industry is as it is
The majority of established UK businesses are based on 300-year-old industries. Through decades of invention, innovation and reorganisation they’ve reached the 21st century as efficient as they can be. They are rooted in physical laws and limitations, primarily based on a process of converting resources of relative abundance into naturally scarce, and sometimes artificially scarce, goods. I’m going to be bold and suggest that November 2008 was the date when we witnessed the end point of 300 years of how post-industrialised countries have conducted business in a singularly physical world. The transition from a physical to a predominantly neurological economy began in the 1970s, but I think 2004 to 2008 was the final tipping point.
Established industries that evolved pre-digital carry a "legacy", derived from years of hard-fought relationships combined with complex systems, cemented in the acculturation of familiar repertoires and known outcomes. Their “systems” are costly to sustain and are often deliberately expensive in order to minimise competition from new entrants, with high barriers to entry defined by an elite minority. Similar by degree to raw-food communities, legacy industry business models require that “food be consumed raw” to survive at their current size and format! Digital threatens to break how they make money; restructure their “form factor”, or in other words alter the size, scope and reach of individual firms; alter who or what represents an “elite”; and is set to significantly change several other business assumptions and practices.
In the past, individual hunters may have hand-picked the best raw food sources and been held in high regard for their knowledge and courage; but then enter the livestock farmers. Pre-fire, specialists in raw-food preparation were critical. Post-fire, previously inedible items could be safely eaten; more importantly, communities were no longer entirely dependent on a minority of elite hunters for their survival. Neither could raw-food preparation specialists easily argue the importance of their role, or credit themselves as the outright point of access to the final product.
Legacy industries have been great hunters; how do we persuade them to become great farmers? Isn’t it about having the confidence to "grow" more value than any one company or individual can capture or convert into a sale: maximising the area from which to harvest, even if doing so directly benefits your competition? It’s a subject certainly worthy of another blog post.
At best, legacy industries don’t know how to grow a neurological economy, or are reluctant to. At worst, they will turn to legislation to try to prevent the adoption of “cooked food”, and will not stimulate change that threatens to break the methods they’ve adopted to make money, or that positions others, like the wood chopper, fire-starter, potter, cook, musician, storyteller or dancer, between them and the point of consumption without them directly receiving revenue from this new economy.
Pickers will not instigate change because they are confident that fruit tastes just great as it is; but without the discovery of fire, and the experimentation, play and cooked food that followed, apple pie and strawberry tarts would never have tasted so good.
Just as light from a flame can be used to illuminate raw food but not cook it, evidence suggests that legacy economies are using digital to reveal their products and services in new areas previously too niche, too risky and too “dimly lit” to justify expenditure. Offering the smell of cooked food promises something delicious, but this deception may also be used simply to mask a serving of the same uncooked offering. It seems, then, that digital outcomes for the 21st century require thinking throughout the entire organisation and as a nation.
… starting from today, a version of “work” for many of us
Individuals and communities independently exploring digital technologies represent a low-risk, low-cost way for legacy businesses to experiment, explore and play with digital. It is then up to those individuals and communities either to package and sell the results of their exploration, experimentation and play to those legacy industries, or to commercially exploit their discoveries directly themselves.
Both semi-skilled and professional white-collar workers face sections of their jobs being outsourced to parts of the world where labour costs are cheaper, call centres being the most obvious example. Secondly, and some may argue of greater concern, as UK firms pursue greater efficiencies, automation from machine-to-machine exchanges will reduce employment even further: think automated supermarket check-outs, automated shelf-stacking, unmanned train station ticket offices, driverless trains, voice-recognition-enabled word processing, airport baggage handling and so on. Considering that this list is expected to accelerate over the coming years, possibly outpacing our ability to compete directly, suggests we should recognise the aspects of production at which humans excel, an idea better represented in figures 1 and 2.
The Luddite Fallacy argues that “machines will make human labour obsolete”. Historically, however, this has not been the case. Technology has enabled fewer people to become more productive and cost-effective, thus growing the overall economy and creating new and unique jobs for humans in other industries. But what happens when machines become the "worker"? I wonder how much 18th-century industrial Britain would have resembled that described in history books if technology like the Spinning Jenny or Watt’s steam engine had, within fifty years, become as smart as the “job(s)” required but still continued to improve; getting cheaper every two years; with a quantifiable increase in speed every two years and a perceptible increase in quality every four; possibly halving capital investment every five years; invoking a disruptive technological redesign on an eight-year cycle, altering the business models and methods of making money for existing producers; prompting marginal costs to fall closer to zero after every cycle; and summoning a tenfold increase in the number of people who could afford access to these new methods of production every ten years… without even considering the impact these factors would have had on when capitalisation occurred; how they would have altered investor attitudes to what constituted physical capital and where money should be invested; indeed, questioning the very notion of industrialisation, production intensification and if or when it should happen at all… raising further questions of how companies would have scaled and increased quantity, and what impact that would have had on the price of goods. Finally, all of this dynamism equating to the resultant effect on how, when and where human labour and skills added value… “phew!”
… we all have to start from where we are
Part of the solution is that skilled workers have to know how to create their own little bit of “fire” and learn how to “cook their own food”. It means a future in which just knowing a craft will not be enough. Traditionally this has been the preserve of the Working Class, conducting their daily tasks through direct instruction, exchanging tangible, often manual skills for paid employment. It seems the working class now live in places like China, India, Vietnam and Brazil, doesn’t it? So-called “semi-skilled” and “white-collar” workers don’t belong to the working class anymore; they are now part of the “Creative Class”: workers, often university graduates, who will need to create work for themselves.
It will require that we extend this list to include vocationally skilled workers: investing time to organise themselves and build heterarchical, not hierarchical, connections and professional networks; developing entrepreneurship; and lastly knowing how to code. For not knowing how to programme in the 21st century will be like not knowing how to read and write in the 20th. Just as our ancestors produced cave paintings, then speech and later writing, programming is the next evolution in the human journey, expressing our compulsion to exchange ideas and communicate, enabled by cheap, easily accessible and readily available tools. Knowing how to programme, however, is more profound than just another chapter in the history of human communication.
The act of programming is about accessing a raw resource and manipulating it to produce something with added value, comparable to how a furniture maker manipulates wood to make a chair or a seamstress manipulates cotton to make a dress. A programming language is imbued with the fundamental properties required to make derivative goods, and programming is the discipline necessary to manipulate those properties... "Programming is the refinery for data - the new crude oil of all nations!"
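To make the "refinery" metaphor a little more concrete, here is a minimal sketch in Python: raw readings (the crude oil) go in, and a small amount of code refines them into something with added value. The data and thresholds are invented purely for illustration, not taken from any real system.

```python
# Raw "crude oil": a list of invented sensor readings (e.g. temperatures).
raw_readings = [36.5, 36.7, 41.2, 36.6, 36.8]

def refine(readings):
    """Refine raw readings: discard implausible values, summarise the rest."""
    plausible = [r for r in readings if 30.0 <= r <= 40.0]
    return {
        "samples": len(plausible),
        "average": round(sum(plausible) / len(plausible), 2),
        "discarded": len(readings) - len(plausible),
    }

print(refine(raw_readings))
# → {'samples': 4, 'average': 36.65, 'discarded': 1}
```

The point is not the code itself but the shape of the activity: a few lines turn an undifferentiated raw resource into a refined, more valuable product, just as the furniture maker turns timber into a chair.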
Departing from our fire analogy for just a moment, and remaining in the physical world of tangible goods: if an equivalent to programming existed there, then with enough skill our furniture maker would be able to manufacture the timber frame of a house, sculpt a wooden statue or build a boat, not to mention other wood derivatives, with the same efficacy as any producer with capital investment dedicated to any of these specific markets. Unburdened by physical limits such as a boat yard, precision tools, or timber availability and its price fluctuations, our furniture maker, already well rehearsed in the code to make wooden goods, could if he so wished learn the programming language for cotton, ceramic or metals relatively quickly.
… it’s all about weaving together a physical and neurological “fabric”
What will make programming so remarkable is when humans learn to weave the neurological world, consisting of sensors, actuators, photonics, internet protocols, and virtual products currently experienced in web space as apps, games, video and documents, directly into the fabric of the physical world. Not everyone will be able, inclined or competent enough to programme to the same standard as a Google software engineer, just as I’m not convinced that everyone has the need or innate ability to write as well as any leading author of novels or poetry, all of the time or even some of the time.
That’s not to say that a hairdresser in Bolton shouldn’t strive to learn how to "programme", despite maybe never possessing the aptitude to programme professionally, in the same way that not everyone who learns to read and write needs to become a professional writer. I'm convinced that our reliance on 300-year-old industries cannot continue, although I humbly admit that these are also the same 300-year-old industries that own the vast majority of intellectual property rights required for new growth. I am also convinced that, for growth in developed nations to occur, the tendency toward industry separateness must cease, and intellectual property held by a few must be made available to many, considering that a great idea, that magnificent thing, could come from just about anywhere.
Indeed, might not a hairdresser from Bolton who programmes, albeit not as well as a Microsoft software engineer, code a pair of scissors that not only cuts hair beautifully but also analyses her customer’s mineral deficiencies and cholesterol levels, and detects that the customer, who incidentally may have just popped in for a perm, is showing spiking data suggesting she’s pregnant… with twins!
Despite the possibilities for programmers from non-traditional sources participating in a neurological world, learning how to programme in its current form is not easy and often feels like one is rubbing two sticks together for an awfully long time before things begin to smoke let alone ignite. So if you agree with the proposition in this post, the big challenge facing developed nations has to be how to teach programming and make it relevant to the majority.
It’s not because UK industry needs it; free-market economics will seek out efficiencies regardless, whether here or overseas, through people, machines or total automation. No, we have to find a way for actors, archaeologists, historians, sales assistants, writers, artists, beauty therapists, journalists, musicians, hairdressers and builders to play a part in a future dominated by digital, and help individuals become producers, not just digital consumers, able to serve up their own example of a digital dish.