Darius is a former high school literary and feature writer with a Bachelor of Science degree in Information and Communications Technology.
The Next Big Thing
Tiny sparks of curiosity within our minds suddenly ignite into a roaring flame the moment we read or hear the term artificial intelligence.
We grew up with renowned entertainment pieces and pop-culture media that reinforce this, with art forms and franchises like "The Terminator," "Westworld," "I, Robot," "Bicentennial Man," "Ghost in the Shell," "Blade Runner," "Alita: Battle Angel," and "The Matrix" usually depicting a near or far-flung future run by and dominated by AI robotics and technologies. Perhaps one of my major technological inspirations was Steven Spielberg's film "A.I. Artificial Intelligence."
However, the AI of today is no longer bound by science fiction, creative limitations, and human imagination, as it is driving the fourth and fifth industrial revolutions of the world. It's here, and it's growing, yet it's going to need a lot of understanding from the masses.
Artificial intelligence will add 13 to 16 trillion dollars' worth of GDP to the global economy by 2030, according to 2018 studies. Most industries today, such as health care, finance, and automotive, are adopting artificial intelligence, making it the next biggest commercial and industrial opportunity for the world's fast-evolving economies.
But even with multiple intricate studies on the subject, AI, like most sciences and technologies newly integrated into society, faces, and will continue to face, a lot of problems and confusion about what it is, how it works, and how to apply it in reality.
The very definition of it is uncertain since the topic is extensive and mind-bogglingly complicated. A simpler interpretation of it is an artificial intelligence integrated within a convoluted system that works to solve a certain problem.
But what defines something as artificial, and what defines something as having intelligence? Furthermore, what defines something that has an artificial intelligence? These are quasi-philosophical questions that run amok in the science and technology community, each with a different interpretation than the last.
Despite all this, today's AI has thrived because of how past learners have used it, how it can be applied in real-world settings, the multiple setbacks it has faced, and the small victories it has gained.
Nevertheless, what is AI? How does AI work? And what are the realistic points of view about it that everyone should know?
In A Nutshell: A Blast From the Past to the Boom of the Future
The creation of the first computers in the 19th century, and the development and spread of digital computers from the 1940s onward, gave rise to new technologies able to perform and carry out complex tasks and commands. This also gave rise to diverse human-like, human-motivated applications such as search engines and handwriting digitization. Still, no existing technology could accurately match human intelligence; the sheer complexity of human intelligence and behavior simply cannot be fully captured by mere numbers and code. Artificial intelligence offered answers to most of these problems by creating systems able to at least mimic human neural capacities.
Artificial intelligence is the ability of a computer, software, or computer-controlled hardware to perform and execute commands, decisions, and tasks the way humans do. In short, it is the broad science of mimicking or imitating human-related tasks, human-driven objectives, and the diverse complexity of the human mind.
A self-driving car, for example, can safely drive down a road without needing a passenger to control it. All it needs to work is a handful of carefully selected datasets to be processed by the AI software installed within the car. Of course, not all of these processes are picture-perfect in execution. Despite multiple setbacks, the same applies to the automation of industrial factories, the accurate diagnosis of preventable diseases, assistance to the agricultural industry, AI software installed in hardware to help with certain tasks, and the creation of new jobs at companies that wish to implement AI in their businesses.
Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed.
— Arthur Samuel, 1959
AI has two distinct kinds:
- Artificial Narrow Intelligence (ANI).
- Artificial General Intelligence (AGI).
ANI implements a limited part of the human mind focused on narrow tasks, while AGI is the hypothetical ability of a technology to do, basically, anything a human can do. To give you a visual aid, imagine ANI as a smart speaker or a word auto-complete system, and AGI as the near-science-fiction robots you see in big-screen movies.
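To make the "narrow" in ANI concrete, here is a minimal, purely illustrative sketch of a word auto-complete system: it only learns which word tends to follow which in a toy corpus, and can do nothing else. (The corpus, function names, and approach are all made up for illustration; real auto-complete uses far more sophisticated models.)

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in a tiny training corpus."""
    follow = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follow[current][nxt] += 1
    return follow

def autocomplete(follow, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    if word not in follow:
        return None
    return follow[word].most_common(1)[0][0]

model = train_bigrams(
    "the cat sat on the mat and the cat slept on the sofa"
)
print(autocomplete(model, "the"))  # -> "cat" (its most common follower)
print(autocomplete(model, "dog"))  # -> None (never seen in the corpus)
```

This is narrow intelligence in miniature: the system performs one task on one kind of data and has no ability to generalize beyond it, which is exactly what separates ANI from the hypothetical AGI.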
AI's two important methods of learning are:
- Machine Learning (ML).
- Data Science (DS).
ML focuses on systems whose data analysis improves drastically through exposure to more data. DS focuses on the careful use of mass volumes of data, deriving information, solutions, and decisions to be used later.
To differentiate the two in a nutshell: machine learning is when input data A becomes output data B. One example is mapping audio data to text-transcript data, thereby creating a system that can do speech recognition. Data science is more about deriving solutions from data. For example, product A costs more and lasts two years but uses a small amount of raw material, while product B costs less and lasts three years but uses more raw material. A piece of definitive information drawn from that data then becomes a possible solution to the problem.
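The A-to-B idea above can be sketched as the tiniest possible supervised learner: a hypothetical nearest-neighbour rule that maps a numeric input A to a label B. The data and labels below are invented purely for illustration; real speech recognition uses far richer features and models.

```python
def nearest_label(examples, query):
    """Return the label B of the training input A closest to `query`.

    `examples` is a list of (input_a, label_b) pairs; picking the
    closest stored example is the simplest learned A -> B mapping
    (a 1-nearest-neighbour rule).
    """
    best_input, best_label = min(
        examples, key=lambda pair: abs(pair[0] - query)
    )
    return best_label

# Toy "training data": a made-up audio feature -> transcribed word.
training = [(0.2, "whisper"), (0.5, "hello"), (0.9, "shout")]
print(nearest_label(training, 0.45))  # -> "hello"
```

The point is only the shape of the task: machine learning fits a function from examples of A paired with B, whereas data science would instead interrogate the dataset to inform a decision.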
Other parts of AI involve processes such as big data, deep learning, and neural networks. These are usually implemented within complex AI technologies, helping the AI work at full efficiency.
The Problems: Well, Some of Them
1. The Data
Data can be acquired from multiple sources and by various methodologies. The problem begins, however, with the data itself. You see, data can be messy. Imagine a thousand-piece jigsaw puzzle where each jigsaw piece is a piece of data. One piece may fit into an empty spot in the bigger picture, but another may not fit at all. A piece can be misplaced, jeopardizing the whole puzzle, and some may even be duplicates. If you think raw data is already that messy, imagine how much messier it becomes once processed. Information, as the output of processed data, can be just as complicated as the raw data it came from.
But why does this technology center its focus on data?
Data plays a lot of important roles in AI technologies, and the more of it, the merrier. It is basically the blood circulating through the vessels that keep the whole system working. Data can be anything: personal, financial, economic, statistical, etc.
Still, one should not over-invest in having too much data. Misuse of data is rampant and often hides even under intense analytical supervision. Expecting a team to use heaping loads of data, and assuming that those massive amounts of data are valuable, can also be a pickle. Whether the data is structured or unstructured should also be defined. Datasets may have incorrect labels, missing values, or even unknown values that render them unusable, wasting time and resources that should have been allocated to other tasks.
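The missing-value and duplicate problems above are exactly what a basic data audit looks for before any model is trained. Here is a minimal sketch, assuming records arrive as a list of dictionaries with `None` marking a missing value (the field names and sample rows are invented for illustration):

```python
def audit_records(records):
    """Report basic quality problems in a list of dict records:
    missing values (None) and exact duplicate rows."""
    missing = sum(
        1 for row in records for value in row.values() if value is None
    )
    seen, duplicates = set(), 0
    for row in records:
        key = tuple(sorted(row.items()))  # hashable fingerprint of the row
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "missing_values": missing,
            "duplicate_rows": duplicates}

data = [
    {"age": 34, "label": "healthy"},
    {"age": None, "label": "sick"},    # missing value
    {"age": 34, "label": "healthy"},   # exact duplicate of the first row
]
print(audit_records(data))  # -> {'rows': 3, 'missing_values': 1, 'duplicate_rows': 1}
```

Running a check like this first is far cheaper than discovering mid-project that a chunk of the dataset was never usable.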
2. The Industry
AI will soon be implemented in future generations of companies, and new commercial opportunities will open as well. Pervasive automation will transform industries by reshaping divisions and roles of labor, meaning lower costs for factors like operations and distribution. But it also means the people behind these tasks should at least know how AI works, and better still if they pursue an engineering course that tackles using AI to solve complex problems.
A company should carefully consider the construction of unified data warehouses and effective strategies for data acquisition. Even so, a company with deep-learning capabilities doesn't automatically count as an AI company. Manual operation of certain parts should still exist, because that is simply how some jobs get done. To put it in an analogy: you can't turn the lights on and off without pressing the switch. You can build a robot to do that, of course, packed with simple instructions to turn the lights on and off. But what about the duration the light stays on or off, and what if the light bulb becomes faulty?
There are things that machine learning and data science can and cannot do, and they tend to work poorly if not studied attentively and constructively.
3. Distress in Occupations
The fear that jobs will be overtaken and replaced by advancing technology is reasonably understandable. A few, or even many, people are afraid that AI will completely take over their jobs, leaving the industries they work in better off without manpower. This is a big bump to flatten out, especially for those who are new to and not well-versed in the subject. But to answer that concern:
Despite its long history and many successful runs of progress, AI cannot do nearly everything a human can, or at least not for the next couple of hundred years.
For example, a human can do a multitude of mundane tasks such as cleaning the carpets, washing the dishes, dusting the cupboards, etc. An AI can typically do only one of these tasks. And even then, a lot of hard work must be put in for the AI technology to do its task as effectively as possible. The AI must learn how to, for example, apply the right amount of pressure or know which direction to clean in. Programming it to work error-free will be a headache (literally, in some cases). In short, the notion of AI taking over everyone's jobs is highly improbable even for our current generation. It may happen eventually, but likely not for a few more hundred years. As much as it can revolutionize the world in many ways, an AI can only handle "simple" concepts unless a newer wave of technological advances pushes it to take on multiple tasks.
4. An Optimistic Viewpoint
A brand-new wave of technology brings all kinds of expectations from people, even those who wish to exploit it. Some may think that AI can be developed to be sentient enough to have super-intelligence, concluding that the world is plagued by humans and deciding it must eradicate all life on Earth. If you've managed to grasp at least the fundamentals of how AI works, you'll see this sort of irrational fear comes from nothing, since AI simply cannot do that. The expectation that technology will rise up against its creators only creates fear for future inventors who actually want to help make the world a better place.
5. A Pessimistic Viewpoint
On the other hand, other people may think that if AI cannot do everything, is it even worth it? AI cannot do everything, but it can do anything: the human imagination is just one of its limits, and it can surely be applied to any imaginable and solvable problem. Examples include solving a 50-year-old biology "grand challenge" decades before experts predicted, or painting a holographic universe using AI. Gradually advancing AI technology may not be the key to solving most of the world's problems, but it may open doors to paths by which these complex problems can be solved.
6. Ethics and Standards
The ethics of and in AI, both learning and using it, can also be a headache. Human-related and technology-related ethical standards to be imposed, should AI take over multiple facets of human life, will need to be critically addressed as well. Pitching and creating answers to these ethical problems will involve almost the same standards the modern world has been trying to impose on the internet and its users.
Along with the employment issues already discussed, privacy and surveillance come into play. AI systems will be prone to new kinds of adversarial attacks, so security will be another priority. One issue that stands out among all of these is the bias of implemented decision systems. New laws will be made and delivered, debating whether an AI and its creators deserve certain rights. And what if an AI suddenly becomes too sentient, too intelligent, too human? The compound effects of AI on the world will be another multi-edged sword, sharp enough to cut through multiple problems and provide answers, yet just as capable of creating more.
Keeping Grounded on Reality
Even so, putting the science of AI on a pedestal to solve everything with a snap of the fingers, magically working on the first test run or even after multiple test runs, is a definite "must not." Even if it offers vast amounts of help, this technology has a wide range of limitations, and it must always be kept grounded in reality. Planning its development through an iterative process is an effective methodology, but the scarcity of machine learning engineers and talent will be a problem if they're bombarded with AI projects, especially ones that don't align with their field of work. Nevertheless, building magnificent teams of them, paired with people versed in the corporate world, will help the creation and innovation of AI projects. And lastly, holding AI engineering teams atop a pedestal won't help the overall project succeed. Working with them, contributing as best you can, and keeping them on track with their plans goes a long way.
Artificial intelligence will mold future generations into highly commercialized and technological societies. But as we sit back and watch its development unfold, people aiming to learn it fully must understand both how it works and how to apply it positively in our daily lives.
Sources
- AI, machine learning, and data science courses from deeplearning.ai and Coursera by Andrew Ng.
- "Artificial Intelligence" article from the Encyclopaedia Britannica.
- A 2019 Forbes article about artificial intelligence.
- "What Is Machine Learning and Why It Matters."
- "What Is Data Science and Its Importance."
- "50-Year-Old Biology Grand Challenge" article from ScienceAlert.
- "Painting a Holographic Universe" article from UC San Diego.
- "Ethics of AI and Robotics" article from the Stanford Encyclopedia of Philosophy.
This content is accurate and true to the best of the author’s knowledge and is not meant to substitute for formal and individualized advice from a qualified professional.
© 2021 Darius Razzle Paciente