The following excerpt is based on the book Tomorrow’s Jobs Today, published by John Hunt Publishing and available at fine booksellers.
Futurist Roy Amara said that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” In Tomorrow’s Jobs Today we interviewed over twenty of today’s most innovative business leaders, like Dr. Anand Rao, to offer a solid perspective on where we are today with Artificial Intelligence, Big Data, Blockchain, Privacy, and the Internet of Things, as well as a near-magical crystal ball into what tomorrow holds.
“Deep learning is not equal to deep understanding. I think we must go beyond it and look at all other forms of learning and intelligence.”
Dr. Rao has over 30 years of experience in behavioral economics, risk management, and statistical and computational analytics. He has co-edited four books, published over 50 peer-reviewed papers, and previously served as the Program Director for the Center for Intelligent Decision Systems at the University of Melbourne. Dr. Rao received his Ph.D. in finance technology from the University of Melbourne, his master’s from the University of Sydney, and his MSc in computer science from the Birla Institute of Technology and Science.
From the interview
Dr. Rao, how has the relationship between academic institutions and the business world evolved in the era of big data and analytics? Is it a balanced symbiotic relationship, and what are its benefits for startups and students alike?
If you’re talking about the AI and big data world, one of the challenges that academics have in this area is that they’ve always tended to have the thought leadership role and control over the publications and so forth, but they haven’t had the data. Businesses have the data, large volumes of consumer-level data. So, the data is one element.
Also, over the past few years, we’ve seen the need for considerable computing power, which again tends to be less available on the university side these days and more in the business sector. Businesses are running large machines with powerful GPUs, quantum computing, all of which is quite expensive for academics to support. That’s been one of the challenges in terms of their research.
In fact, in the big data world, in the AI world, the academics are lagging, and additional levels of investment are needed. In the good old days, there were all these joint efforts in supercomputing, and governments were investing very heavily. Since then, at least over the past couple of decades, most of that funding has pretty much vanished, and universities are now asking businesses for that same type of investment. Of course, where the relationship is good, productive symbiotic relationships exist: the companies provide the data, and the academics provide both the techniques and the people, basically university students and postdocs, to analyze that data.
The futurist Roy Amara famously noted, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Within the context of the digital disruptions of the last few decades, do you believe his statement still holds up, or do we have a better grasp of where technology is taking us?
In the case of AI especially, it’s still very much true. As we know, there is quite a lot of hype around it, with everyone saying, “Hey, AI is going to take over. We’re going to lose all our jobs,” which I think, at least in the near term, is not going to happen. There are still many hurdles that need to be overcome, so I don’t think all of that fear is warranted. In the longer term, yes. There is a slow progression of value being added by these machines. So, humans need to position themselves in vocations that add value, as opposed to repetitive tasks.
Those roles are where most of the automation is happening. I think everyone needs to ask themselves the question, “If I’m doing something very repetitive or manual, am I adding value here?” So, that’s the key. Because more and more of those responsibilities will be taken over by the machine. At the moment, people shouldn’t be worried, but in the longer term, certainly, AI will undoubtedly completely transform the job market.
You’ve spoken quite passionately on the ethics of emerging technologies. At a recent World Economic Forum meeting, you remarked how AI is a force for good but warned that there remains a lot of fear and misconceptions preventing adoption. What are today’s most problematic ethical obstacles for developers, and how are they overcoming them?
The foremost ethical challenge is the notion of bias and fairness. There are a lot of ethical principles, and we’re all told we should treat every person the same and not be biased by gender, ethnicity, views, age, or any of the other criteria. So, we can say that. But if I’m a data scientist, the question becomes, how do I make sure that I’m not introducing any bias in the way my algorithm is making recommendations? If I’m working at a bank and writing a machine learning algorithm that decides the credit limits for individuals, how exactly do I go about doing that? How do I make sure that what I’m doing is not unfair by any measure? There are several instances like that in every industry.
The reason it’s so problematic is that we use historical data for machine learning, and when you use historical data, as in any society, it may be biased in one way or the other, right? Either toward specific ethnic groups or genders, and that can all get reflected in the AI recommendation. There have been some very well-known cases, one very recently involving two large players, one a tech company and the other a bank. In this example, a man applies for a payment card and gets a certain credit limit. But if his wife applies for the same card, even on a joint account, her credit limit is somehow much lower. The question suddenly becomes, why? These stories are now finding their way into the press, and so there is a lot of discussion to be had around issues of bias.
There are, by some accounts, 32 definitions of fairness. Do data scientists even have the time to understand all the implications of each of them on what they’re building? That’s an enormous challenge. Another challenge is, plainly, “How do I explain what my algorithm is doing?” It’s a tall order for certain types of techniques within AI.
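The tension between competing fairness definitions can be made concrete with a small sketch. The Python below is a toy illustration with entirely invented data and function names: it computes two commonly cited criteria, demographic parity (equal approval rates across groups) and equal opportunity (equal approval rates among applicants who would actually repay), and shows that one set of lending decisions can satisfy the first while violating the second.

```python
# Toy illustration of conflicting fairness definitions.
# All data and names are invented; this is not any bank's real method.

def demographic_parity_gap(groups, decisions):
    """Difference in approval rate between group 1 and group 0."""
    rate = {}
    for g in (0, 1):
        picks = [d for grp, d in zip(groups, decisions) if grp == g]
        rate[g] = sum(picks) / len(picks)
    return rate[1] - rate[0]

def equal_opportunity_gap(groups, outcomes, decisions):
    """Difference in true-positive rate: approvals among actual repayers."""
    tpr = {}
    for g in (0, 1):
        picks = [d for grp, y, d in zip(groups, outcomes, decisions)
                 if grp == g and y == 1]
        tpr[g] = sum(picks) / len(picks)
    return tpr[1] - tpr[0]

# Invented example: group 1 has a higher base rate of repayment,
# so equal approval rates do not imply equal treatment of repayers.
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
outcomes  = [1, 0, 0, 0, 1, 1, 1, 0]   # 1 = repaid the loan
decisions = [1, 1, 0, 0, 1, 1, 0, 0]   # 1 = approved

print(demographic_parity_gap(groups, decisions))           # 0.0: equal approval rates
print(equal_opportunity_gap(groups, outcomes, decisions))  # negative: group 1 repayers approved less often
```

In this invented example the two groups are approved at identical rates, yet qualified applicants in one group are approved less often than in the other, which is one reason no single metric settles the fairness question.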
Another big issue is safety. Is the AI that I’m building safe? Is it tested under all kinds of extreme conditions? The more sophisticated the system, the tougher it is to test some of these things. So, those represent challenges from an ethical perspective, even though, to a large extent, there’s no doubt about what the ethics should be. People agree that AI should be safe, explainable, and fair. How exactly to do that is by no means close to being solved.
One impact of the adoption of AI is the prospect that companies will need fewer employees to reach or surpass their goals. You’ve said the top three industries ripe for disruption include finance, healthcare, and auto. To what degree will the workforce in those industries be displaced over the next 15 years?
We should look at it not necessarily as displacement, but as augmentation or a change in the job. Yes, there will be a certain number of people who will be displaced, but a majority of them will likely see their job description change. I think, like the previous revolutions that have occurred, AI will ultimately create more jobs, different types of them, which may not even be attributed to a specific AI solution. I don’t see mass-scale unemployment where machines will replace everything. I don’t see that situation coming at all. I expect that people will need to become more comfortable using machines, using recommendations, and learning when to use judgment. Deciding, “When do I accept the machine’s recommendation versus when do I go on my intuition and experience?” For example, just because my personalized book-recommender suggests a particular collection doesn’t mean I’m buying each and every book. Right? I use my judgment, read through my options, and then decide. Irrespective of what the system says, I’m still making certain decisions. Over time, using discretion and AI, we’ll get better at what we’re doing.
One of the concepts we use is that “man plus machine” is better than either one on their own. So, in spite of all the machine intelligence, just using a machine isn’t yet the right thing for a multitude of serious decisions that we make. By the same token, just using humans without some machine aid is, in numerous cases, also not optimal. That’s why man-plus-machine is a better way of approaching it right now.
What or who initially influenced you to pursue a career in data science?
I found my way into AI way back in 1985. I did my Ph.D. at that time; I had completed my computer science degree, and the only thing I could think of was wanting to get into AI. I was just fascinated by the human thinking process and being able to mimic that in a machine.
What sage advice might you have for those drawn to AI domains like machine learning, robotics, and neural networks? How do they break into an industry that seems so daunting and sophisticated?
I guess for AI data scientists, the first piece of advice I would give is that deep learning is not the same as deep understanding. I know there’s a lot of excitement around deep learning, but I think we must go beyond it and look at all other forms of learning and intelligence. The way deep learning traditionally works is based on what patterns you can draw from the data. That’s just one way that we learn; we also learn in other ways, and not enough work is being done on those: continuous learning, learning at the symbolic level, learning not just from patterns but by inference. I’m sure that at the highest level, even some of the leading researchers in deep learning are very conscious of these drawbacks and are trying to address them. Also, as an AI scientist, you need to be open-minded in embracing different things to be able to move forward in the creation of AI. So, that would be one piece of advice.
For someone wanting to break into the industry, from a business point of view, there are some easy things that you can do with AI to give you or your business a big return on investment and lead to career growth. We call it using “cool” AI to solve boring problems. What I mean by boring problems is the back office. With most businesses, there are a lot of invoices, and there are a lot of text documents. People are just going through them, extracting information. That’s a very tedious task, or a boring task if you like. AI can help a lot in those areas. It can remove all the drudgery. There is a lot of it left in the service economy as well that AI professionals can help remove. Of course, removing that drudgery means you then need to start adding value, rather than just replacing mechanical tasks. But once you accept that challenge, I think there are countless opportunities to start doing more exciting things in the space.
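The “boring problem” Dr. Rao describes, pulling structured fields out of free-text documents, can be sketched in a few lines. A real system would combine OCR with a trained extraction model; the minimal Python below uses plain regular expressions instead, and the invoice text, field names, and patterns are all invented for illustration.

```python
# Minimal sketch of back-office field extraction from invoice text.
# A production pipeline would use OCR plus a learned extractor;
# regular expressions stand in here, and all data is invented.
import re

INVOICE = """
Invoice No: INV-2043
Date: 2021-03-14
Vendor: Acme Office Supplies
Total Due: $1,284.50
"""

# Hypothetical field names mapped to the pattern that captures each value.
PATTERNS = {
    "number": r"Invoice No:\s*(\S+)",
    "date":   r"Date:\s*([\d-]+)",
    "vendor": r"Vendor:\s*(.+)",
    "total":  r"Total Due:\s*\$([\d,.]+)",
}

def extract_fields(text):
    """Return whichever fields match; fields with no match are simply absent."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            fields[name] = match.group(1).strip()
    return fields

print(extract_fields(INVOICE))
# {'number': 'INV-2043', 'date': '2021-03-14', 'vendor': 'Acme Office Supplies', 'total': '1,284.50'}
```

Even this crude version shows the shape of the opportunity: a clerk reading thousands of such documents is doing work a machine can draft, leaving the human to verify the edge cases where judgment is needed.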