Darius is a former high school literary and feature writer with a Bachelor of Science degree in Information and Communications Technology.
The Next Big Thing
Tiny sparks of curiosity in our minds ignite into a roaring flame the moment we read or hear the term artificial intelligence.
We grew up with renowned pop culture works like The Terminator, Westworld, I, Robot, Bicentennial Man, Ghost in the Shell, Blade Runner, Alita: Battle Angel, and The Matrix, which usually depict a far-flung future run on and dominated by AI robotics and technologies.
However, the AI of today is no longer bound by science fiction and human imagination; it is helping drive the fourth and fifth industrial revolutions of the world.
According to 2018 research, artificial intelligence will add 13 to 16 trillion dollars' worth of GDP to the global economy by 2030. Most of today's industries, such as health care, finance, and automotive, are adopting artificial intelligence, making it the next big commercial and industrial opportunity in our fast-evolving economic climate.
But even with these many intricate studies on the subject, AI, like most newly society-integrated sciences and technologies, often faces confusion about what it is, how it works, and how to apply it in reality.
Its very definition is uncertain, since the topic is extensive and mind-bogglingly complicated. A simple interpretation is "an intelligence that is artificial." But what makes something artificial, and what makes something intelligent? Furthermore, what defines something as having artificial intelligence? These are quasi-philosophical questions that run amok in the science and technology community, each interpretation differing from the last.
Despite all this, the AI of the present has thrived because of how past learners have used it, how it can be applied in real-world settings, the multiple setbacks it has faced, and the small victories it has won.
But what is AI? How does AI work? And what are the realistic points of view about it that everyone should know?
In A Nutshell
The creation of the first mechanical computers in the 19th century and the propagation of digital computers from the 1940s gave rise to new technologies able to perform and carry out complex tasks and commands. This also enabled diverse human-like, human-motivated applications such as search engines and handwriting digitization. Still, no existing technology could accurately match human intelligence; the sheer complexity of human intelligence and behavior simply cannot be captured by mere numbers and code. Artificial intelligence offered answers to most of these long-standing problems by creating systems able to at least mimic human neural capacities.
Artificial intelligence is the ability of a computer, software, or computer-controlled hardware to perform commands, make decisions, and execute tasks the way humans do. In short, it is a broad science of technology that mimics human-related tasks and the human mind.
A self-driving car, for example, can safely drive down a road without a passenger controlling it. All it needs to work is a handful of carefully selected data and datasets for the car's AI software to process. The same applies to automating industrial factories, accurately diagnosing preventable diseases, helping the agricultural industry, and creating new jobs at companies that wish to implement AI in their businesses.
Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed.
— Arthur Samuel, 1959
AI has two distinct types: Artificial Narrow Intelligence (ANI), or Weak AI, and Artificial General Intelligence (AGI). The former implements a limited part of the human mind focused on narrow tasks, while the latter is a hypothetical ability to do, basically, anything a human can do. As a visual aid, try imagining ANI as a smart speaker or a word auto-complete system, and AGI as the near-science-fiction robots you see in big-screen movies.
AI rests on two important fields of learning: machine learning and data science. The former focuses on systems whose data analysis improves drastically through the use of data. The latter focuses on the careful use of large volumes of data to derive information, insights, and decisions for later use.
To differentiate the two, in a nutshell: machine learning is when a system learns to map input data A to output data B. One example is mapping audio data to text transcripts, thereby creating a system that can do speech recognition. Data science is more about deriving solutions from data. For example, product A costs more and lasts two years but uses a small amount of raw materials, while product B costs less and lasts three years but uses more raw materials. A piece of definitive information derived from that data then becomes a possible solution to the problem.
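The "A → B" idea above can be sketched in a few lines of plain Python. This is a hypothetical toy example (the data and the hours-to-pages scenario are made up, not a real speech-recognition system): the program "learns" a mapping from example input/output pairs using a simple least-squares line fit, then predicts an output for an unseen input.

```python
# Minimal sketch of machine learning's "input A -> output B" idea:
# learn a mapping from example pairs, then predict for new inputs.

def fit_line(xs, ys):
    """Learn w and b so that w*x + b approximates y (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution for a single input feature.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Made-up "A -> B" pairs: hours of audio (A) -> pages of transcript (B).
hours = [1.0, 2.0, 3.0, 4.0]
pages = [12.0, 24.0, 36.0, 48.0]

w, b = fit_line(hours, pages)
print(round(w * 5.0 + b, 1))  # predict pages for 5 hours of audio -> 60.0
```

Real machine learning systems learn far more complicated mappings than a straight line, but the principle is the same: the mapping is inferred from example data rather than programmed by hand.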
The Problems: Well, Some of Them
1. The Data
Data can be acquired from multiple sources and by various methods. The problem begins, however, with the data itself: data can be messy. Imagine a thousand-piece jigsaw puzzle in which each piece is a datum. One may fit an empty spot in the bigger picture, but another may not; one piece can be misplaced, jeopardizing the whole puzzle, and some may even be duplicates. Data plays many important roles in AI technologies, and more of it is usually merrier; it is the blood circulating through the vessels of the whole system. Nevertheless, one should not over-invest in simply having a lot of it. Misuse of data is rampant and often hides even under careful analytical supervision. Expecting a team to use heaping loads of data, and assuming those massive amounts are all valuable, can also be a pickle. Whether the data is structured or unstructured should also be established. Datasets may contain incorrect labels, missing values, or unknown values that render them unusable, wasting time and resources that should have been allocated to other tasks.
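The kinds of problems described above can be caught with a basic data-quality check before any model ever sees the data. The sketch below uses a small made-up dataset (the records and field names are hypothetical) to flag rows with missing values and duplicate rows in plain Python.

```python
# A basic data-quality check: flag missing values and duplicate rows
# in a (made-up) dataset before spending time training on it.

records = [
    {"id": 1, "label": "cat", "weight_kg": 4.2},
    {"id": 2, "label": None,  "weight_kg": 30.5},  # missing label
    {"id": 3, "label": "dog", "weight_kg": None},  # missing value
    {"id": 1, "label": "cat", "weight_kg": 4.2},   # duplicate of row 1
]

# Rows containing any missing (None) field.
missing = [r["id"] for r in records
           if any(v is None for v in r.values())]

# Rows whose full contents were already seen earlier.
seen, duplicates = set(), []
for r in records:
    key = tuple(sorted(r.items()))
    if key in seen:
        duplicates.append(r["id"])
    seen.add(key)

print("rows with missing values:", missing)  # -> [2, 3]
print("duplicate rows:", duplicates)         # -> [1]
```

In practice a team would also check label correctness and value ranges, but even a cheap pass like this prevents a model from being trained on unusable rows.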
2. The Industry
AI will soon be implemented in future generations of companies, and new commercial opportunities will open as well. It will bring pervasive automation and will transform industries by adding new divisions and roles of labor. A company should also carefully consider building unified data warehouses and effective strategies for data acquisition. Despite all this, a company with deep learning capabilities is not automatically an AI company. There are things machine learning and data science can and cannot do, and they tend to work poorly if not studied attentively.
3. Occupational Distress
The fear that jobs will be overtaken by advancing technology is reasonably understandable. A few, or even many, people are afraid that AI will completely take over other people's jobs, leaving their industries better off without manpower. This is a big bump to flatten out, especially for those who are new to and not well versed in the subject. To answer that concern:
Although it has a long history and many successful runs of progress, AI cannot do almost everything a human can, or at least not for the next few hundred years.
For example, a human can do mundane tasks such as cleaning the carpets, washing the dishes, and dusting the cupboards. An AI can do, at most, only one of these tasks. And even then, a lot of hard work must still be put in for the AI to do its task as effectively as possible. It must learn, for example, to apply the right amount of pressure or to know which direction to clean. Programming it to work error-free will be a headache (literally, in some cases). In short, the notion of AI taking over one's job is highly improbable for our current generation. It may happen eventually, but the likelier assumption is that it is a few hundred years away. As much as AI can revolutionize the world in many ways, it can only handle "simple" concepts unless a newer wave of technological advances pushes it to take on multiple tasks.
4. An Optimistic Viewpoint
A brand-new wave of technology brings all kinds of expectations from people, even those who wish to exploit it. Some may think AI can be developed to be sentient enough to attain super-intelligence, conclude that the world is plagued by humans, and decide to eradicate all life on Earth. If you've grasped at least the fundamentals of how AI works, you'll see that this irrational fear comes from nothing, since AI simply cannot (and likely never will) do that. The expectation of technology rising up against its creators only breeds fear among future inventors who actually want to make the world a better place.
5. A Pessimistic Viewpoint
On the other hand, some people may think that if AI cannot do everything, is it even worth it? AI cannot do everything, but it can be applied to nearly anything; imagination is one of its few limits. Examples include solving a 50-year-old biology "grand challenge" decades before experts predicted, or painting a holographic universe with AI.
6. Ethics and Standards
The ethics of learning and using AI can also be a headache. Human-related ethical standards that would apply should AI pervade the world need to be critically addressed as well. Pitching and creating solutions for these ethical problems will look much like the standards the modern world has been trying to impose on the internet and its users. Along with the employment issues discussed above come privacy and surveillance concerns. AI systems will be prone to new kinds of adversarial attacks, so security will be another priority. One issue that stands out among all of these is the bias of deployed decision systems. New laws will be made and debated on whether an AI and its creators deserve certain rights. And what if an AI becomes too sentient, too intelligent, too human? The compound effects of AI on the world will be another multi-edged sword, sharp enough to cut through multiple problems and provide answers, even as it creates more.
Keeping Grounded on Reality
Even so, putting the science on a pedestal, expecting it to solve everything with a snap of the fingers and the AI to magically work on the first test run, is a definite "must not." This technology has a wide range of limitations, and one must always keep it grounded in reality. Planning its development as an iterative process is an effective methodology. Machine learning engineers and talent will be scarce if they're bombarded with AI projects, especially ones that don't align with their field of work. Building strong teams of them, paired with people versed enough in the corporate world, will help the creation and innovation of AI projects. And lastly, holding the AI engineering teams atop a pedestal won't help the overall project succeed. Working with them, contributing as best you can, and keeping them on track with their plans goes a long way.
Artificial intelligence will mold future generations into highly commercialized and technological societies. But as we sit back and watch its course of development unfold, those aiming to learn it fully must understand both how it works and how to apply it positively in our daily lives.
- AI courses from deeplearning.ai by Andrew Ng, with a certificate available if taken through Coursera.
- Artificial intelligence article from Britannica.
- A 2019 Forbes article about artificial intelligence.
- What is machine learning and why does it matter?
- What is data science and why is it important?
- 50-year-old biology "grand challenge" article from ScienceAlert.
- "Painting a Holographic Universe" article from UC San Diego.
- "Ethics of AI and Robotics" article from the Stanford Encyclopedia of Philosophy.
This content is accurate and true to the best of the author’s knowledge and is not meant to substitute for formal and individualized advice from a qualified professional.
© 2021 Darius Razzle Paciente