Knowledge-First AI, GPT3, & More

The Integration of Human Knowledge in Technology

The world is changing at an exponential rate, including the way we interact with computers. From the 1970s onward, we saw the rise of personal computing, soon followed by server farms. Eventually, cloud computing took over, making it possible for anyone to access their data from anywhere in the world at any time. Then came neural networks – computers designed to mimic human brain function – which developed into deep learning algorithms and led us into an era where deep learning and artificial intelligence are no longer just buzzwords but practical applications used by individuals and businesses every day.

In episode 86 of the Feedback Loop Podcast, "Knowledge-First AI, GPT3, and More," entrepreneur and technologist Christopher Nguyen explores the many facets of artificial intelligence, including his ideas around knowledge-first artificial intelligence (AI), bad training data, the black box problem, Generative Pre-trained Transformer 3 (GPT3), deepfakes and more.

As we said, the world is changing whether we like it or not. So let's first dive into the ripple effect of change.

From manufacturing chips and transistors all the way to creating software, we have only begun to scratch the surface of human intelligence through our technology. For many, this progression seems normal: we have always augmented ourselves in new ways with the technology of the day. However, Nguyen highlighted that now that our technology can approach the threshold of human intelligence, we may be on the brink of something very different and a little intimidating.

"There is something very qualitatively different and powerful but also very disturbing when we think about augmenting our minds with technology that may possibly be smarter than us." 

How is AI changing?

Over time, AI has grown progressively more sophisticated, and it was only a matter of time until we began to make a few adjustments. As the CEO and co-founder of Aitomatic, Inc. – the world's only Knowledge-First AI Engine for Industrial AI – Nguyen centers his efforts on AI engineering as a response to this exponential progression of technology. His knowledge-first approach builds on the same machine learning used across the digital industry – which largely predicts clicks on ads or behavior on social networks – but with a twist: applying human domain expertise to things that lie outside the database.

"You can have terabytes of data, but it doesn't contain the expertise that your engineer or even your user has accumulated in the industry over the last 20, 30 years in their brain. So knowledge-first AI is about the combination of human knowledge and data to build better predictive models than you could build with data alone." -Christopher Nguyen

The knowledge-first AI approach has already been proven as a concept in the physical world. Nguyen gave an excellent example regarding Tesla. When driving a Tesla, or any self-driving car running the latest full self-driving beta, you can pretty much count on it driving itself 90% of the time. A great deal of human knowledge is built into those systems so they operate as they should; without it, self-driving would remain only a concept. That is the innovation taking place today in companies across the world. And such innovation raises the question of where we stand when it seems the world can move on without us.

Will human-AI collaboration make humans obsolete?

No one truly knows what will happen in the future. One thing is certain, however: technology was created as, and will always exist as, an ever-evolving tool for humans to navigate the world. Like any tool, it can be dangerous. But the power of a tool is not inherent in the tool itself; it lies in where it is used and the scale at which it is applied. Nguyen, for example, pointed to the internet and its potential reach: for the first time, we could reach 7 billion people, where in the past our reach was exponentially smaller. Reach alone is to be respected, if not feared, for what it could become and how it could evolve. To counteract this, we must make intentional decisions so that technology moves in calculated directions rather than drifting into potentially harmful ones.

Contemplating the possibility that humans could one day be obsolete might even be good for humanity, since our emotional response to the thought keeps us improving ourselves. But, as Nguyen highlighted, we have always found ways to augment ourselves, and technology has always augmented our abilities. What we see today – GPT, large language models, multimodal applications and so on – can make us more powerful than before, just as technology has throughout the years.

And just as we use technology to augment ourselves and improve what we do, technology needs to be steered to correspond to human nature as well.

How are we navigating a path for technology?

The human brain-computer interface needs intentional steering; without it, technology will naturally end up somewhere we never intended. According to Nguyen, there are two types of steering: high resolution and low resolution. We need to be careful when maneuvering something we know little about, like the human mind. High-resolution steering must be exercised with caution, as it leaves little room to consider the outcomes that may follow from certain decisions. Low-resolution steering gives us more freedom and space to notice the "iceberg" ahead by acknowledging potential risks before we act. Much like the human mind, there are things we don't fully understand about technology, so cautious steering is how we can forge a path for the inevitable evolution of something like a black box.

Still, even with the best guidance, nothing is ever truly perfect. Deepfakes, for example, are a growing concern as technology progresses, so adaptation is necessary. We often hold AI to a standard well beyond the one we hold our human counterparts to, including those far more capable of destructive behavior. Thinking about AI's progression and adaptation is interesting because it raises our awareness of just how bad some human behavior really is. The line between the gradual evolution of human behavior alongside technology and the natural progression of technology alone is something to bear in mind when blazing a trail for technology and adapting to where it is going.

Humanity will continually need to adapt as technology does. However, Nguyen poses the question: "What happens if technology adapts faster than our biological rate? What does that mean?"

And now that technology has become more integrated with human knowledge and behavior, it can no longer be discussed separately from ethics.

What are the ethics behind technology?

The topic of ethics can involve many gray areas. For the progression of technology to continue in a controlled and intelligent way, there needs to be a clear thought process behind conversations about how ethics influences technology. To narrow the topic, Nguyen highlighted the separation between intent and impact: sometimes people don't intend to do bad things but end up having a negative impact, a familiar consequence of the human condition. To stay within humanity's ethical boundaries, building a bit of the human condition into our technology isn't always negative. For example, when certain technologies are created, preparing data with biases is often a good thing. Many of us assume that "bias" is negative, something we've always tried to separate from the human experience. In fact, machine learning would not work without inductive bias: a model needs that built-in human knowledge in order to learn from the world around it.

Also, when building technology, one must remember the responsibility one bears for how that technology will or may be used. Nguyen cites the familiar example of "guns don't kill people; people kill people." Those creating and developing technology have an ethical responsibility to understand and bear the consequences and the cost of progress and innovation. And when navigating ethics, those creating technology also need some level of education to make decisions that are best for the people who will be affected by it.

So a deep educational background in a specific area of expertise is what matters most in technology, right?

How important is data variance?

A well-rounded education always has been, and always will be, a cornerstone of the betterment of humanity. Oddly enough, though, we seem to be slowly moving away from it, into situations where people explore and manage technology without a broad humanities education. People are writing code, for example, that reaches billions without thinking about philosophy or other deeper issues before releasing that technology into the world.

With his emphasis on education, Nguyen highlights the launch of his university, Fulbright University Vietnam, which grew out of the Fulbright Program at the Kennedy School. Interestingly, it specializes in the liberal arts with strong science and engineering foundations. He claims that over the next 10-20 years, graduates of the institution will be leaders of society, business and industry. The value of a liberal arts education with a strong foundation in STEM is undeniable.

To reiterate the importance of a well-rounded education, Nguyen referenced a UCLA literature professor's explanation of the value of literature that resonated with him: "What literature gives you is the ability to step into the mind of other people and other cultures and experience things that otherwise would be unreachable to you, and thereby become a better person." Likewise, exploring technology allows us to experience a myriad of things we otherwise wouldn't, from people and places all around the world. That kind of power should be handled with a well-rounded education and a lot of care.

As someone who has lived in many countries, was a refugee as a child and has encountered a spectrum of different people in his lifetime, Nguyen noted that diversity has undeniably helped him in life. We now see the same pattern in machine learning: data variance is a positive thing and truly necessary.

The human brain is potentially the best machine learning algorithm and the best inference model. A well-rounded education, then, helps people become better engineers for the same reason data variance helps models. Multilingual people are similar: they know that learning a new language can make them better at the languages they already know, and we already see the same effect in language models today.

On the other hand, some may point out that even our most advanced machine learning tools, like GPT3, still work very much like a black box AI. Regardless of our educational background or model of the world, we can't shape much of what goes on inside that box. For Nguyen, this is all about the discipline of alignment. In the same way you would align your own child with your values, we can align our technology with what we intend. If we build technology and let it run wild, it will end up in a random direction we never intended. The ethics of alignment are also worth bearing in mind, as the field is still emerging.

Still, with a foundation of knowledge, there will always be things beyond our scope of understanding, such as a black box.

Can the black box ever be solved?

In science, computing and engineering, a black box is a system viewed only in terms of its inputs and outputs, usually accepted without any knowledge of its internal workings. For example, we have created inference algorithms, and although we may not know their every detail, we know they are ultimately multiplying numbers and so on. In that sense they are understandable – a "white box" to an extent. However, we don't know how this intelligence has emerged or where it is going. At some point, Nguyen explains, we have to leap, which is in some way the equivalent of accepting a black box at some level. We often say, "I just know this thing works, and I'm going to use it. I'm going to use it for my homework. I'm going to use it for my proposal." Without even thinking about it, we put trust in things we don't understand. Once we understand and accept that the black box cannot yet be opened, maybe we can shift our focus back to the question of alignment and the intent of our technology. This also brings us back to our sense of consciousness: if we can't introspect, how can we inspect these machines and say whether they do or do not have certain intent?
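The input/output view described above can be sketched in a few lines. The `opaque_model` below is a hypothetical stand-in, not any real system; the point is that all we can do with a black box is record its behavior at the inputs we try, and conjecture about it, without ever explaining how the mapping is computed.

```python
def opaque_model(x: float) -> float:
    # Imagine this body is hidden from us: black-box analysis
    # never looks inside.
    return 2.0 * x + 1.0

# All we can observe: input/output pairs at the points we probe.
observations = {x: opaque_model(float(x)) for x in range(5)}

# From behavior alone we might conjecture "it looks linear"
# (constant differences), but not how or why it is computed.
looks_linear = all(
    abs((observations[x + 1] - observations[x]) - 2.0) < 1e-9
    for x in range(4)
)
```

Note the asymmetry: the probing code would work unchanged on any function with the same signature, which is exactly what makes the black-box view both powerful and unsettling.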

Can we learn something new from something old?

Working similarly to a black box, GPT3 raises the question of whether it is possible to learn where an answer comes from – which parts of the data set it draws on most heavily. We've all seen or done it before: using old information to gain knowledge, then using that knowledge to create something new. As Nguyen noted, in the extreme this practice amounts to taking somebody's art and style and creating a new piece from it, with someone else's work as the prime data the model was trained on.

This is perhaps where creativity is born: at the intersection of two unrelated things, combining the two ideas together. And how we credit the sources of knowledge relates to how much was taken or used. The ability of machines to combine two unrelated concepts and manifest something new is revolutionary. However, content that already exists belongs to someone, right? So what does it mean to build on something that already exists, and what does that mean for the person who owns the original work?

So do we reconsider intellectual property?

Taking someone's work, building on it and making something new begs the question: at what point will we have to reconsider intellectual property? Intellectual property is so nuanced that, according to Nguyen, moving forward it will not be based on source code but on the models themselves.

We now live in a world where companies know they don't have to be the best at learning algorithms, which are shared, or even at public data sets. Companies now view their intellectual property as two things: unique data sets and domain knowledge. That is where they innovate, not by building off shared models; if the value lay in a shared model, someone could simply build a different algorithm, and it would no longer be valuable.

So what do we want to protect? Our knowledge – we will always come back to knowledge. Whether embedded in models, experts or workers, that is what will be unique as we move forward.

A new economy is emerging – one with new business models, new tools, new audiences, new customers and so on – in contrast with the old economy's ways of making money, learning and more. So it is vital that, whatever you do, you stay connected to the new economy and the changes that come with it. That means being aware of what's happening and willing to adopt new tools: keeping your mind open, but not fearful. Fear has never led to greatness. There's a reason creative people tend to be optimists – you have to believe in the possibilities.


Singularity's team of internal thought leadership works to develop interesting resources, articles and insights about our core areas of expertise, programs and global community.
