In 1906, a book entitled The Law of Automobiles was published by Xenophon Pearce Huddy, a lawyer in the United States. It was early days in the evolution of the automobile, and rules and regulations had yet to catch up with this ground-breaking new technology. There were fewer than 200 000 vehicles on American roads at the time, but Huddy was already asking questions about the legal implications around pedestrians, street safety and speeding. He wrote: “…many branches of the law are being affected by the horseless carriage…. Where the automobile’s permeating influence will stop is beyond prophesy. It is certain, however, that the motor car, including everything connected with it, is bound to be the subject of a vast amount of litigation in the future.”
Fast forward 113 years to Johannesburg, South Africa, where delegates attending the Emerging to Converging Technology: The Future World conference were asking much the same question about issues ranging from designer babies and synthetic neurobiology to rights for robots.
The rise of self-driving vehicles was also up for discussion, with keynote speaker Dr. Kenneth Oye from MIT asking how human beings might feel when, not if, automated vehicles cause human deaths on the road. Even though consulting firm McKinsey & Company has projected that driverless cars could reduce road deaths by 90% in the United States alone, Oye queried how mankind would view the fatalities that will still occur.
“The autonomous vehicles we will be speaking about in a year to two years will be better at avoiding accidents,” said Oye, “but is being better enough?” Nobody could foresee all the implications of the early automobile. Before that invention, horses hit pedestrians on a regular basis, but what was the reaction in 1899 when Henry Hale Bliss became the first pedestrian in the United States to be killed by a motor car? And what, asked Oye, would the reaction be if our new 21st-century driverless technology resulted in the death of a child?
“If you talk about the development of technology, early incidents have an impact,” said Oye. “I would suggest a higher standard than that for new technology. More stringent requirements, because our uncertainty is great.”
Oye stressed the need for scientists and academics, businesses and governments to keep the ethical considerations of technology in mind, and for legislation to be equally mindful of this necessity. “We have a duty to evaluate actions with reference to both legal standards and ethical norms,” he said. “You need to be doing more than mere compliance. Those duties and responsibilities are both good ethics and good business.”
Ethics… or else
To illustrate his point, Oye highlighted several examples where companies failed to pay attention to ethical considerations, environmental consequences, security implications and the prevailing sentiment, and paid the price by opening themselves up to risk.
Balancing risk and opportunity is key, he stressed, highlighting the case of AquaBounty, a United States biotechnology firm which lays claim to creating the world’s most sustainable salmon. By modifying salmon genes, the company was able to grow the fish bigger and faster, but it ran into trouble when it took the product to market. “AquaBounty didn’t engage completely with civil society or bring in groups around ethics, and they got pushback,” recalled Oye. “For years, even after [regulatory] approval, they hit consumer resistance. So you have to be attentive to the potential risks of business, or they can bite you on the neck.”
Swiss synthetic biology company Evolva also came up against consumer resistance in 2015, when Nestlé’s commitment to using only natural vanilla turned the spotlight onto the company’s synthetic vanilla bean replacement, vanillin, which is brewed from yeast. “The company ran into intense resistance because NGOs wanted to strangle synthetic biology vanilla at birth,” recalled Oye. “Anything attached to food always comes with huge sensitivity and concerns.”
Similarly, xenotransplantation (such as transplanting organs from a gene-edited pig into humans) raises policy issues around informed consent and moving organs from one species to another. And using gene-based technology to try to eradicate diseases like dengue fever comes with its own implications. As Oye explained: “We don’t know what will work and what won’t and there is the possibility that the diseases will adapt.”
These sorts of genetic interventions arouse suspicion and fear in the minds of people. In the American state of Florida, for example, British biotechnology firm Oxitec engaged with the community on releasing lab-grown mosquitoes to combat infectious diseases. The degree of resistance it encountered saw it abandon the project.
Calling for a code of conduct for researchers, Oye added that the international reaction to genetic modification has been strong and there is a lot of pushback. “We should be engaging with those who might be critics… with civil society, regulators and the Sheldons [theorists and scientists like Sheldon Cooper from the TV series The Big Bang Theory] in the room.”
His final example was the recent case of the Chinese CRISPR babies, where the genomes of twins Lulu and Nana were edited before birth in an attempt to make them resistant to HIV infection. Chinese scientist He Jiankui has since been suspended and the consequences have been severe, said Oye, pointing out that “the reactions in China have been worse than in the West”.
Additional ethical questions concerning robots were raised by Queensland University of Technology’s Dr. Peter Corke, who was even drawn into discussing the likelihood of rights for robots. He addressed considerations such as using robot security guards and self-driving vehicles to transport money, adding: “If you had robotic guards then you have the same problem you do with robot armies.” These include the implications of robots killing humans on demand, something the United Nations has already said should be closely monitored.
Governments and governance
All these examples raise questions not only about the ethical implications of technological advancements but also about the role of regulation and government policy.
As GIBS lecturer Manoj Chiba observed: “We all love technology but we don’t have a handle on the change.” Neither, it seems, do those who create policy. “The regulator, from a technology space, is about three to five years behind,” was Chiba’s observation.
Corke agreed: “We never regulate until it’s too late and entrenched. Maybe we don’t want to invest the effort until we know this is something people care about. Consider artificial intelligence; some people hold strong views about this being a dangerous technology, others say we shouldn’t worry. But that’s not a very good argument. It’s a problem we know will come, but nobody knows quite how to regulate it.”
What entrepreneurs on the ground need, however, is the flexibility to seize the opportunities without reams of red tape, said Brian Bosire, co-founder and CEO of Kenyan agriculture technology company UjuziKilimo, which helps farmers with crop yield optimisation through soil analysis and farming recommendations. “Government has to be adaptive,” said Bosire. “Technology is exponentially rising and regulation is always following. Government needs to recognise that we need enough room to experiment and solve local needs. But they also need to adapt fast enough so they don’t hold back new sectors. This is one of the biggest challenges government needs to figure out.”
In Africa, added Dr. Bitange Ndemo, chairman of the Blockchain and Artificial Intelligence Taskforce for the Kenyan government, there is an added challenge in uniting Africa into one economic bloc and developing shared standards. “The African Union (AU) is an impediment to Africa’s development,” he said. “If you look at their vision, it’s 2063, by which time everyone will be dead. There is a huge opportunity which the AU can play. We need leadership in the AU to bring Africa together.”
Until that happens, individual countries will continue to carve out legislation on their own, stretching the timeframes and clogging the policy pipeline.
A deeper dilemma
In addition, when it comes to Africa, there may be a bigger, more all-embracing ethical dilemma around technology.
In the final session of the day, the panel began to unpack the implications of technology adoption on a continent battling unemployment, inequality and social ills, touching on the ethical considerations of uptake against such a backdrop.
For Corke the debate was simple: “People talk a lot about robots working with people to increase productivity. The addition of robots means your enterprise will achieve more. This is probably the most enlightened way to use this technology. But in Singapore, if you introduce robots that will displace people, you have to keep these people in the firm and reskill them. If machines are going to come in, then you have to do something with those people. That social consideration has to trump everything else.”
These are the challenges facing regulators around the world.
What about privacy?
During a healthcare-focused panel discussion, South African medical doctors Dr. Elton Dorkin and Dr. Evangelos Apostoleris touched on the implications for personal privacy and how policy should handle this sensitive matter. Referencing Kenya, where government, business and civil society have committed to working together to develop a data privacy and protection framework, Dorkin pointed out that “one of the principles was that if innovation is paid for by the public then the public should have access to it”. In other words, systems created with state resources should be put to use for the good of society, with an opt-out option for individuals, of course.
Ndemo was well placed to respond to this during the Q&A session following his presentation, highlighting the slow adoption of privacy laws in Africa. “More than 15 years ago we started developing data protection laws in Kenya,” said Ndemo. “We have been pushing to adopt European Union data protection laws, but we still don’t have a data protection law – the challenge is the speed at which technology is moving, which is not compatible with the legislators who are looking at their own selfish interests.”
This fuels a level of distrust in the system and, for individuals, often spurs on concerns around who has access to their personal information. Apostoleris was quick to stress, in the South African context, the importance of compliance with acts such as the Protection of Personal Information Act, especially when dealing with sensitive healthcare information.
KEY TAKEAWAYS
· Regulation, in general, plays catch-up to innovation.
· Faced with new, sometimes ethically ambiguous technologies, should more stringent requirements be put in place?
· Scientists and academics, businesses and governments must keep the ethical considerations of technology in mind.
· Companies that fail to pay attention to ethical considerations, environmental consequences, security implications and prevailing sentiment open themselves up to risk.