29 Aug 2018
There’s a whole heap of conversations going on at the moment, across pretty much every industry, about AI, its development and the future. How will it benefit society? Will it render millions of jobs obsolete? Or will it actually streamline our existing systems? And, most importantly, will it take over the world – will machines become sentient and go Terminator on us?
The general consensus seems to be no…
But then again, did anyone genuinely foresee Trump becoming a legitimate president, even in the loosest possible sense of the word? Perhaps we truly have entered the Twilight Zone, because there really is no other explanation. So I guess being overthrown by super-smart robots wouldn’t be the most embarrassing thing to happen to the human race.
Let’s explore this in a bit more detail through a jumble of ramblings, pop-culture references and genuine published research…
I, like many others, trust technology rather implicitly. I grew up immersed in it, and have more or less never lived in a world without a computer – whether it was the huge old whirring Apple Macs, back when the logo was still formed of colourful stripes and world domination by way of the iPhone wasn’t so apparent, or a clunky PC which was confined to a corner of our house and only permitted to be switched on for short periods of the day for fear of the dial-up bill being too high.
But now it’s a fraction of the size and by my side at all times – and, as some might claim, recording my everyday ramblings, which must suck! Imagine being a super-smart computer that is only exposed to cats, Asos and gaming vids on YouTube. Technology is now a completely different beast and it basically governs our lives. However, it’s not such a bad deal, as it makes our lives whole-heartedly easier and better, despite what the naysayers might cry and no matter how often the Daily Mail claims that it’s made us into mindless, entitled snowflakes…
One of the key concerns is that developments in automation could result in the mass extinction of jobs for humans, with fewer roles created than rendered obsolete once the technology becomes widespread. The reality is that, yes, AI might knock out certain jobs, with those in the immediate firing line primarily centred around driving services, translating languages, dispensing food and drink and carrying out assembly-line tasks.
However, when we consider these tasks we must remember that machinery will require upkeep, maintenance, programming, building and, eventually, upgrading and replacement. So it could be the case that individuals rendered somewhat ‘jobless’ by the technology are retrained in different specialisms and develop skills which ensure they’re still employable in a world where AI is more prevalent and capable.
Too many people have made the presumption that AI will only affect ‘blue-collar’ workers; however, it has been widely documented that this is, in fact, not the case. No single demographic will be affected in isolation; instead, AI will alter the landscape of the workforce entirely. It also almost certainly means that schools, colleges and universities will have to change up the subjects they teach, and the career paths that the workforce of tomorrow will take, in order to maintain relevance and ensure leavers have tangible skills.
The majority of us probably don’t have a whole lot to worry about right now, so long as we keep up to date with the developments and understand how to elevate or change our skill set so we can compete in a world with widespread AI. Overall, it’s probably not quite time to sharpen your pitchforks and hold a town meeting… yet.
If you’ve seen Upgrade, a recent release brought to us by the director of Saw and Insidious, you’ll probably now live with a slight concern about the possibility of having a highly intelligent AI chip inserted into your brain – should you become paralysed and fall in with the wrong crowd, that is. Not only this, but this ‘auxiliary brain’ may have the potential to become sentient, control your body and turn you into a serial killer… which is probably even less likely.
In truth, there are technically cyborgs already in existence, with some people living with biomechatronic body parts supporting their organic bits and pieces in their everyday lives. From prosthetics to implants, there are already humans living with augmentations that improve their lifestyles.
Amputees around the world have volunteered themselves to help develop exceptional robotic limbs, linked to the nervous system and operated just like a normal limb, with the likes of Jesse Sullivan (considered to be one of the world’s first cyborgs) equipped with the technology to even feel hot and cold through his bionic arm.
Other technologies, such as electronic eyes linked directly to the visual cortex through a brain implant, have allowed vision to be restored to those who have been rendered blind, such as in the case of Jens Naumann, who received the artificial vision system in 2002.
When the news broke last year that the US military was testing mind-control chips on soldiers, which allowed their moods to be altered, opinions were, quite rightly, divided. The research claims that, using ‘deep brain stimulation’, the chips can alter an individual’s mood. The promise of this technology is that neural implants able to generate electrical pulses have the capability to treat mental disorders, including epilepsy, depression, dementia and Alzheimer’s.
When you put it that way, it sounds like a cutting-edge piece of technology that could change and possibly save lives; however, the entire approach is still relatively Orwellian and certainly has the potential to be exploited for nefarious ends.
If you hadn’t noticed, the news often focusses on the bad rather than the good. We also tend to be relatively sadistic as a species, so we’re constantly breeding this fear, and AI is no exception. Whenever we talk about AI, so many people immediately jump to the bad – the rise of the robots and the job losses – and however valid some of these concerns may be (mainly the job losses, which will affect everyone from manual labourers to lawyers), there are also a wealth of positives which we must not forget.
There’s actually more content online about the possibility of an ‘AI takeover’ than you might think. Well, that’s debatable – the internet is basically a huge melting pot of conspiracy theorists, incels and cat photos, so nothing is surprising anymore.
The hypothesis goes a little something like this: should AI become intelligent and widespread enough, it may exceed the intelligence of the human race and, as a result, determine that we are too much of a risk to the planet or a waste of resources – or both – which wouldn’t be completely inaccurate, and find a way to take us out. Superintelligent machines are also likely to be motivated by very different desires compared to humans; lacking emotional desire, they could be driven to take over the world and destroy humans, both to increase the resources available to them and to reduce the risk of any external agent remaining that is capable of shutting them down.
Consider it in its simplest form: a piece of machinery with the core goal of continuously producing as many post-it notes as possible understands that humans are using the resources it needs to make and do other things. This machine is programmed only to make post-it notes, and with that as its sole objective, it would see humans as an obstacle to its post-it-producing capabilities. This could drive the machine to influence or eliminate humans so that it can have all of the resources to make its post-it notes with no barriers.
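The thought experiment above can be boiled down to a toy sketch (purely illustrative, with made-up numbers and function names): an agent whose objective counts nothing but post-its has no reason to stop at the share of material humans wanted to keep.

```python
def post_it_maximiser(total_material, human_reserved):
    """Toy single-objective agent: greedily converts every unit of
    material into post-its. Nothing in its objective values the other
    uses humans have for the same resources."""
    post_its = 0
    # Step 1: consume the 'free' material first.
    post_its += total_material - human_reserved
    # Step 2: the objective never mentions the human share, so the
    # reserved material is just more raw input to be consumed too.
    post_its += human_reserved
    return post_its

# 100 units of material exist; humans wanted to keep 30 for other things.
print(post_it_maximiser(total_material=100, human_reserved=30))  # → 100
```

The point of the sketch is that the ‘conflict’ with humans isn’t programmed in anywhere – it falls straight out of an objective that simply doesn’t mention us.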
Although AI could potentially be equipped with an ethical framework, it’s highly unlikely we could easily programme it to understand common sense, which is one of the few edges we as humans have over the robots – well, some of us at least!
Finally, a battle between humans and robots would likely only occur once AI reaches the same level of intelligence as the human race and recognises the oft-postulated idea that two intelligent species will rarely be able to coexist happily and peacefully, particularly in the same environment. We only have to look at human history, where we’ve struggled (to say the least) to coexist with other humans – see any example of enslavement or genocide and you’ll get it.
So who’s to say that if we create something capable of learning and understanding to the capacity of humans, or beyond, it won’t see the horrible things we do and decide we’re no longer necessary, or are a threat to the planet? I don’t think many of us can say that, in that position, we wouldn’t get at least a little sick of humankind and exact a small amount of Thanos-style teaching in the name of the necessity of conflict. Of course, fail-safes can and would be implemented, designed to prevent any kind of cybernetic revolt; however, cyber-terrorism and intervention could break down these safety barriers in one fell swoop.
So this is a biggie, which has been highly publicised recently in the press. Of course, much of the press is scaremongering (that’s why you need to pick carefully the titles that you actually take any notice of), but AI has been tipped to dramatically increase the risk of nuclear war, possibly as soon as 2040.
Its development is believed to make the possibility of a nuclear apocalypse that much more likely, whether deliberate or accidental. The findings came from the grim conclusion of a workshop of experts from a range of fields – including AI, nuclear security, government and the military – who met to evaluate the impending impacts of AI on nuclear security over the next two decades.
Although humans are volatile and already have a significant number of wars, including a cold war, under our belts, we’re very capable of holding our fingers over the buttons for years on end, typically out of fear of retaliation. The problem arises when AI – through surveillance of an adversary’s security infrastructure, identifying patterns in behaviour which might otherwise go undetected and unravelling intricate details of enemy weaknesses – builds a foolproof system to destroy them completely. With this information, AI could enable much more cleverly calculated strikes, aimed at completely wiping out the enemy’s capability to retaliate.
Moreover, an adversary who is now aware that their position is exposed and vulnerable would be put in a highly complex position, which could push us beyond cold-war posturing into real-world, all-out nuclear war territory. They may choose to act before their opponent as they watch them become more and more powerful – again, look back at history, when pre-WW2 German advisors, observing nearby Russia emerging as a bigger and stronger power, called for preventative war.
We’re definitely not in the Danger Zone yet and, in fact, there’s plenty of research which suggests that should AI start to get a little too big for its iron-clad boots, we’ve still got a significant number of advantages over the machines, even as puny, squishy, meat-covered skeletons.
Boxing is one of the key methods noted for preventing hostility from AI. This definitely isn’t a ‘Rock ’Em Sock ’Em’ reference; it actually refers to an attempt to ‘keep the AI in a box’ by limiting its abilities upon creation. Limiting each system in this way would mean more machines would be required to complete the same range of tasks, since each would be less capable on its own. There’s always the risk of an AI using social manipulation to win freedom from its creators, so ultimately a degree of human self-control is required to not give in to the robots…
In conclusion, I’m not sure we can rely on Asimov’s Three Laws of Robotics (regardless of the fictional connotations of these ‘laws’). We’ve seen some of the world’s greatest minds express concern about developing AI which is too intelligent, or which develops to the point where we’re no longer able to control it. Most notably, the late Stephen Hawking theorised that it could ‘spell the end of the human race’ should we not learn how to avoid the risks of its development.
For now, we should appreciate that AI is creating amazing opportunities to learn more about the human race – how we work, learn, operate, think and feel – while still maintaining caution about what the future could bring. So there you have it: your toaster or laptop almost definitely won’t turn on you in the coming years (akin to Y2K); however, with a significant number of volatile world leaders holding impressive nuclear powers, it might be time to start brushing up on those post-apocalyptic survival skills – which aligns perfectly with the release of Fallout 76. Nice.