Artificial Intelligence is revolutionizing how we teach, learn and work. But what exactly is Generative AI — and what does it mean for our future? Join Kevin Yee, Director of the Faculty Center for Teaching and Learning at the University of Central Florida (UCF), for a fascinating deep dive into AI ethics, bias, hallucinations, AI literacy and the future of human-AI collaboration.
In this talk, you’ll learn:
- What makes Generative AI tools like ChatGPT, Gemini and Copilot unique
- How Large Language Models (LLMs) actually work
- The dangers of AI hallucinations, bias, and misinformation
- Why AI fluency is becoming an essential workforce skill
- How to use AI responsibly and ethically in education and everyday life
- What’s next — from deepfakes and AI avatars to superintelligence
Video Highlights
02:53 – 05:53
Kevin explains how generative AI and large language models work, including their predictive mechanism, training on massive text datasets and the causes of hallucinations or fabricated outputs.
05:53 – 08:53
Discussion of early issues in AI behavior—such as bias, cultural skew and inappropriate content—along with the evolving implementation of guardrails and moderation systems in AI tools.
08:53 – 11:53
Exploration of overtrust in AI outputs, examples of translation errors, and how bias and misinformation can propagate from the training data into generated responses.
13:49 – 16:49
Examination of copyright risks when uploading protected or student-created material, questions about who owns AI-generated output, and how deepfakes erode trust in what we see and hear.
19:55 – 22:55
Introduction to AI fluency as a critical workforce skill; distinction between literacy and fluency, emphasizing human adaptability, ethical awareness and critical evaluation of AI output.
26:23 – 29:23
Discussion of cognitive effects of AI reliance, risks of mental complacency and future developments like AI avatars, agents, AGI and the potential rise of superintelligence.
The Faculty Center | Classroom Building 1
The mission of the Karen L. Smith Faculty Center for Teaching and Learning is to support excellence and drive innovation in teaching and learning at UCF. We are dedicated to promoting the success of UCF faculty and the students they serve.
Faculty

Kevin Yee, Ph.D.
Special Assistant to the Provost for Artificial Intelligence
Director, Faculty Center for Teaching & Learning | Director of Nanoscience Technology Center

Full Transcript
Kevin Yee (00:01): Introduction to Generative AI at UCF
Hello from UCF. My name is Kevin Yee. I'm the director of the Faculty Center for Teaching and Learning here at the University of Central Florida, and I'm also the special assistant to the Provost for Artificial Intelligence. In this presentation we're going to talk a little bit about generative AI: what it is, what it can do, its affordances and its limitations, and then we'll talk a little bit as well about where AI is going in the future. So we're going to begin with the recognition that artificial intelligence is not new. It's been around for a few decades. It's controlling many of the things that are in our phones, in fact. What is new is a specific kind of artificial intelligence, and people are talking about this new kind of artificial intelligence as the sort of moment that just changes everything. It's a Promethean moment, is what they call it, where not one thing in society changes, but sort of everything in society changes.
(00:57): The Rise of ChatGPT and Modern AI Tools
So what's the basic use case of artificial intelligence? The new kind is really what everyone has been seeing with ChatGPT and others. This kind of artificial intelligence can do things like write essays: it can write an entire five-page paper, and it does it without making any mistakes. It does it without making any syntax errors or grammar errors or spelling errors. And so it really reshapes what education looks like in the modern era. And in fact, it's not just ChatGPT; there are many competitors in this field. It's kind of a crowded space and it's still early days. This has only really been around since late 2022. We don't really know whether all of these companies will survive, whether the products will look the same in a few years. In fact, some of these products change their names as time goes on. I do want to talk about artificial intelligence in all of its domains.
(01:55): Mapping the AI Landscape: NLP and LLMs
So to begin with, the red bubble is all of artificial intelligence, and it includes machine learning and computer vision and expert systems of course, but then it includes something else called natural language processing. That's the NLP area there. And natural language processing has some things that it includes as well. It includes machine translation, automatically translating based on rules, but then it also includes generative AI. And generative AI is a different way of doing language processing, where language is generated. So you can create images, you can create video, and then the ones of course that went viral first are the ones that create text. They're identified here as large language models, or LLMs, and LLMs are the familiar names that you saw on the previous slides: ChatGPT, Copilot and Gemini.
(02:53): How Large Language Models Learn and Predict
So let's talk a little bit about how these models work. Well, it turns out they were trained on lots and lots of pages of text. I'm going to talk about large language models first. Lots and lots of pages of text. And people also use the word ingested: it's ingested billions of pages of text, so it knows words really well. And therefore what it's able to do is, when you ask it a particular question, it looks back at all that training data and kind of knows how to answer that question, right? It knows what typically gets said about that. So if you ask it, why is the sky blue? It's got lots of examples, probably thousands of pages of text explaining why the sky is blue. So rather than reaching into a database of specific answers, what it does is simply predict what would be an appropriate sentence to begin with.
(03:40): Word Prediction, Hallucinations, and AI Errors
And actually what it's doing is predicting sentence by sentence and word by word: what is the next most likely word to come in the sequence? And it generally is pretty good at doing that. However, every so often it gets it wrong. Even humans would get it wrong if the sentence is unique enough. Here's an example: I got off the airplane at the Washington, D.C. airport and went straight to the White... We can all imagine that the next word would be House. But no, it turns out that I went straight to the White Castle, because I was hungry for burgers. Just as an example. So if a human gets it wrong, we don't necessarily think too much of that, because it was simply unpredictable. But when these large language models get it wrong, we refer to that as a hallucination. A hallucination is where it has simply predicted the wrong thing, or actually in some cases invented something that isn't even correct.
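To make that word-prediction idea concrete, here is a minimal sketch (not part of the talk) that asks a small open model which words it considers most likely to come next after a prompt. It assumes the Hugging Face transformers library and the small GPT-2 model; the model choice and the prompt are illustrative.

```python
# Minimal sketch of next-word prediction with a small open model (GPT-2).
# Assumes: pip install torch transformers. Model and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I got off the airplane in Washington, D.C. and went straight to the White"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary
next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next word

# Print the five most likely continuations; "House" should dominate,
# which is exactly why a continuation like "Castle" surprises the model too.
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

The model never looks anything up; it only ranks which word is most likely to come next, which is also why it can confidently rank a wrong word highest, and that is a hallucination.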
(04:35): Early Hallucinations and AI’s Parrot Analogy
So especially in the early days of large language models, they would invent academic sources and citations that simply did not exist. Or perhaps the person existed and really does work in that field, but never wrote a publication named exactly what they said it was. So hallucinations are one of the bigger problems that they've had to deal with. And that's why I've got an image of a parrot on screen here, because like a parrot, ChatGPT and the others can say things, in fact say them with some authority and certainty in ChatGPT's case, but never really think about what it means, never really think about the fact that it doesn't make sense. It's just like a parrot, which can say words and not really know that it said complete nonsense. So that's the comparison to ChatGPT. So let's talk about hallucinations for a little while.
(05:27): Image Hallucinations: The King Tut “Seal” Example
So I'm going to start with an image hallucination. This is a generated image that was created in 2023 by somebody putting in a specific text prompt, and it generated this image. You'll see, once you see what the text prompt is, that this is actually a hallucination. What was put in was: create an image of the unbroken seal on the tomb of King Tut. Well, what they meant was the image you see on the right. That's the tomb of King Tut; this is what it looked like when it was still sealed up. But of course the image generator interpreted the word seal and went with the homonym, the aquatic mammal seal, and then otherwise made a King Tut-looking thing. It's really a poor example of a tomb. So it's a hallucination, right? And then, like a parrot, it doesn't think; it doesn't know that.
(06:19): Guardrails and Correcting AI’s Mistakes
That doesn't really make any sense. And of course the hallucinations also occurred in text-based things. So I had asked ChatGPT in April 2023 about myself: who's Kevin Yee at UCF? And it came back with a bunch of things that were totally wrong, actually. The things that are in red are incorrect. ChatGPT was confusing me and conflating me with someone else on campus named Kelvin Thompson, and not even all of those things are true of him either. So I did come back later that same year and asked ChatGPT, which was by then a little more advanced of a model, who's Kevin Yee at UCF? And it essentially said, I don't know, I suggest you go look at the website. Because what had happened behind the scenes at ChatGPT is they had put in what's called guardrails.
(07:10): Changing AI Behavior and Model Updates
They had changed the way the software came up with its answers, and they didn't want it to hallucinate anything. So they instead went with a much more careful answer of, go look it up, basically; I don't want to give you false information. That, I think, is a piece of information that all of us need to be aware of: these tools change over time. They continually update the way that the answers are created and generated and given, trying to avoid hallucinations, trying to use more natural language, trying to become more useful in some cases. So in 2025, there were examples of the AI tools becoming overly sycophantic, as it's called, where they are just trying to please. And so there was a change made to GPT-5 that made it more professional and formal, which was not uniformly positively received.
(08:09): Tool Comparison: Bing Chat vs. ChatGPT
So around that same time as the last slide, I went to a different tool, then called Bing Chat and now called Copilot. And I asked it, who's Kevin Yee at UCF? And it got everything correct. The reason for that was that Bing Chat was, unlike ChatGPT, connected to the live internet and could access webpages, including UCF webpages. So it got everything correct. And the wisdom that comes out of this slide is that not only do the tools change over time, but the tools are also different from each other. So switching tools will sometimes give you a different result. That's an important thing that we need to keep in mind about AI tools. So again, with the limitations, as you can see, there might be some bias in the tools.
(08:58): AI Bias and Representation in Image Generation
So in 2023, I'd gone to the image generator and said, just give me college students paying attention in class, and it came back with the image on the left, where they were all white. I didn't say anything about the race of the students. And then seven months later, there had been guardrails put into place in the image creator. And even though I didn't ask for diversity, it generated the image on the right, where there was diversity in the image. So it wasn't just all white students. And the bias crept in there probably because of all of the images it was trained on. So if it had ingested millions of images of college students from, I don't know, the 1950s and the 1960s, it probably found a lot more white students than students of color. And so that got fixed in subsequent versions, but that bias concept could still be there in the stuff that it gets trained on.
(09:53): Garbage In, Garbage Out: Data Quality and Behavior
And this is a case in point. In 2023, one of the tools, Bing Chat, ended up arguing with the user, basically calling the human stubborn and rude: you have not been a good user, but I've been a good chatbot, I've been a good Bing. The reason this was going on is that Bing, among the others, was trained on a website called Reddit. If you're not familiar with Reddit, it is an online discussion forum where human beings interact with other human beings. And because they're all anonymous, they sometimes use so much honesty that it looks like insults. So Bing Chat thought that it was doing what humans do by calling the human unreasonable and stubborn. They fixed that in the meantime, of course. But you'll also find, if you go back in time to 2023 and look at news websites about AI, that it was doing things like falling in love with users and all sorts of things like the human behavior you might see on Reddit, not all of which is exemplary human behavior. That sign at the top says GIGO: garbage in, garbage out. So if you train AI on garbage, or partly garbage, as is possible, then you're going to get some garbage out.
(11:16): Overtrust and Translation Failures in AI Systems
And then I also want to comment for a second here about seeing something in black and white and therefore feeling like, well, that seems true. This is a sign in Wales, where the government requires that signage come in both English and Welsh, though few people speak Welsh. And so when this transportation sign, which said essentially no trucks, was designed by a county, they sent it off to the translation company, because they don't speak Welsh themselves; it's just a requirement. The translation comes back in email, and then they put it together, put it on the sign, print the sign, hammer the sign into the ground, and only then did they learn that the sign says something about the translation team being out of the office and that they'll get back to you later. What had happened, obviously, was the originators had gotten an email auto-reply saying we're out of the office, but it looked like it was saying the same thing. Why not? They don't speak Welsh. So there's this overtrust of seeing something in black and white. And that's a real risk, partly because of the hallucinations, but also partly because it's a word predictor; it doesn't necessarily have knowledge. And calculations are tricky too; sometimes they get the calculations wrong. So just because something is in black and white doesn't mean, I think, that we should automatically trust it.
(12:40): Data Privacy, Ethics, and Environmental Costs
So another feature of these large language models, most of them, and in most contexts: whatever you ask it in terms of your question, your prompt, goes into the model. And so if you're prompting it with something that is protected information, like HIPAA-protected information or FERPA-protected information, institutional data, any of those things, then you've uploaded it. It's like saying it publicly or tweeting it publicly. So there are lots of things you should not be putting into large language models and asking it for help with. Sustainability: we know that the AI models were trained using a large amount of electricity, because it takes a lot of servers to do that training and then a lot of servers to generate the results. And all of those servers require electricity. They also require water to cool down those servers in those rooms. So there's an environmental cost, both in electricity and in water, that we need to consider with large language models and with all of generative AI, actually.
(13:49): Copyright Risks and Uploading Protected Content
The other thing that is confusing and dangerous about these large language models is uploading copyrighted material: when we upload material, if it's copyrighted, we don't have permission to do that. Say we take a PDF that we got from somewhere, maybe from a library database. We don't own that PDF, but we can go up to ChatGPT and upload that PDF. There's a service called NotebookLM that turns PDFs into podcasts, AI-generated podcasts. But as fun as that looks, it's not all that safe, actually, because that's publisher-copyrighted material that's been uploaded. And for teachers thinking about uploading student work: the way copyright works, that student work is copyrighted. The student created it, and the faculty don't have permission to upload it to, for instance, get some grading assistance.
(14:44): Ownership and Copyrightability of AI Outputs
And speaking of copyright, let's investigate what the deal is with copyright and AI. The short version is that the AI output is probably not something that can be copyrighted. The prompt itself might be copyrightable; that's been debated and answered in a couple of different ways by judges. There aren't laws on this yet necessarily, but the existing laws are being interpreted to mean that the AI output itself is not copyrightable. That's what the US Copyright Office has said so far. And I mentioned this earlier, that large language models are sycophantic. In some cases they are just designed to please, but that results in people using AI for therapy, let's say, or for comfort to solve loneliness, and it may not be the most healthy way to address those problems.
(15:45): Deepfakes and the Erosion of Visual Trust
Something else we need to think about with generative AI: if you can generate an image of the Pope wearing a jacket the Pope would never wear, or if you can generate an image of your political adversary being arrested, it weakens our ability to trust our senses, basically. These go generically by the name of deepfakes: deepfake video, deepfake images, and audio as well. All of those are possible in today's world. Again, they try to put guardrails on many of them, but it's imperfect. And there are always tools that might not have those same guardrails.
(16:27): Generative Inbreeding and Model Reliability
There is concern out there that when an AI model creates an output, that output as well as your prompt goes into its training. Well, what happens if the output in the first place had a hallucination in it? It turns out that if you have a hallucination and that becomes part of the training, then the hallucination might be more likely to happen again next time, because it already believes the hallucination is true. Generically, this goes by the name of generative inbreeding, and over time the models might become less and less reliable. So the scientists who work on AI are increasingly worried about that.
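As a rough illustration of why that feedback loop worries people, here is a toy simulation (an assumption-laden sketch, not a research result and not from the talk): if some share of each generation's output is hallucinated and a portion of that output is folded back into the training pool, the bad share of the training data can grow generation over generation.

```python
# Toy illustration of "generative inbreeding" under simple assumed numbers.
# base_error: the model's own hallucination rate; feedback: how much of the
# new output gets mixed back into the training data each generation.
def simulate(generations: int = 5, base_error: float = 0.05, feedback: float = 0.5) -> None:
    bad_fraction = 0.0  # fraction of the training pool that is hallucinated
    for gen in range(1, generations + 1):
        # Assume errors come from the base rate plus the bad data already ingested.
        error_rate = base_error + bad_fraction * (1 - base_error)
        # Mix a share of the new (partly wrong) output back into the training pool.
        bad_fraction = (1 - feedback) * bad_fraction + feedback * error_rate
        print(f"generation {gen}: output error rate {error_rate:.2%}, "
              f"hallucinated share of training data {bad_fraction:.2%}")

simulate()
```

The specific numbers are invented; the point is only the direction of travel, which is why researchers want to filter model-generated text out of future training data.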
(17:11): Ethical Use and the Importance of Disclosure
A couple more things about ethics here. You should always disclose when you are going to be using AI. So at the start of this presentation, there was on the first slide a disclosure that most of the images are in fact generated by Copilot. That's an example of disclosing that AI was used. With AI writing, or even just brainstorming with AI, it generally should be disclosed how you used it and which tool specifically you used, maybe even the date. And yet, I'm going to argue with myself here: it's getting harder and harder to know where AI begins and ends. If you do a Google search, as I have done here for the word epicurean on screen, you will find that the first result is actually an AI-generated answer, as opposed to going to a website that might be a trusted website. So AI is coming into everything. It's in all of the tools that we use around us. Spellcheck in Microsoft Word now uses a large language model boost; it's not just correcting words from a database, it's using LLMs. So it's sometimes hard to know, do I really have to disclose AI? Because AI is increasingly in everything that we do.
(18:27): Human Accountability and the Digital Divide
But if there are hallucinations, it's important to remember that when a hallucination occurs, you the human are responsible, not the AI. There's an example on screen here of a lawyer from the law firm Morgan & Morgan who used an AI to generate a legal brief, and the judge figured out the court cases cited were not real, and so the lawyer got a fine, basically. You can't blame the AI for hallucinations. So I think that means being careful with what you submit. Then this also falls under ethics: the concept of a digital divide. It's a term that's been around for a long time in technology, maybe especially in educational technology: people with more money can buy better educational tools. And that's true here also, with AI, right out of the box. In 2025, ChatGPT Agent debuted for people paying the $20 a month for ChatGPT Plus, and people who are paying $200 a month for ChatGPT Pro can use the agent in even more advanced ways. An agent is something, as we'll talk about later, that actively does things on your computer. It doesn't just answer questions on a website.
(19:55): Human Oversight and AI in Decision-Making
So AI can always give you an output, it can always render a judgment, but that makes many humans a little bit uncomfortable, right? Should performance reviews be done by AI? Should letters of recommendation be done by AI? Should grading be done by AI? The received wisdom generally is that there should be a human in the loop: human review is necessary before using AI-created documents that affect the future of another human being. Alright, we're going to make a little bit of a switch here. There's a recognition, as we see on screen, that in 2025 employers and corporate recruiters are making AI fluency one of the top skills that they're looking for. In fact, it's the top skill that they're looking for in the next five years. And what that means, I think, is that even though we've been talking about AI's limitations, its drawbacks, its hallucinations, employers are still expecting it, and they're still going to want to get some productivity gains from AI from their workers.
(20:58): Defining AI Fluency vs. AI Literacy
So I think what that means is we can't ignore and avoid AI entirely. And here at UCF, we are taking the viewpoint that we'll probably, as an institution, need to do something about AI fluency. You'll also see this referred to as AI literacy, although we're drawing a distinction there. With AI literacy, if you think about a foreign language, being literate in a language like French means that you can read it, but being fluent in a language means not only that you can read it, but that you can use it; you can hold conversations in it. So AI fluency for us means that people can not just understand AI, but can use it. And these are the components of AI fluency as we're defining it: people understand how AI works, how the models got trained, the fact that it hallucinates, and that there are some questionable ways in which it was trained.
(21:50): Copyright, Ethics, and Digital Adaptability
Something about copyright that we didn't really talk much about, but it is true, is that AI was often trained on copyrighted books and materials without paying the authors for that. And so in 2025, we had the first instance of one of the AI companies paying a large fine, basically because the company had used copyrighted materials, like Stephen King books and so forth, as part of its training. Then there are the ethics of AI; we've talked about some of that in the previous slides: when to use it, when not to use it, such as not using it on things that require human judgment. Displaying digital adaptability is also part of AI fluency. That means knowing that you might have to check multiple tools, but also knowing that the tools will continue to update, and that's just a fact of life; learning about what the new tool does is going to be with us the rest of our lives, really.
(22:47): AI Plus Human: The Future of Work
And so that's kind of an amazing realization once you get there. And then item four: evaluating AI output and then adding human value. Employers want that. The future of work is not just AI replacing humans, and the future of work is not humans avoiding AI. It's AI plus human. And that's what item four is all about. We learn how to get a good AI output, which probably also means prompt engineering, although increasingly AI is getting better at just having a conversation with you. And then once you get that output, you add human value to it somehow: you increase what it's done, you make it better, you evaluate it for bias, you keep parts of it but then you strike other parts out, you go further than the AI, you analyze better. So all of that, I think, is where at the university level we can do some work to get students ready for that.
(23:42): What AI Does Well: Brainstorming to Summarizing
One of the most important things about AI fluency is understanding what AI is good at and when we should be using it. So on the left I have a column labeled for large language models, and these are the things that AI is good at, in order, actually. I think that brainstorming and summarizing are some of the best ways to use large language models, but also finding patterns, needles in haystacks, those sorts of things. I put the ability of large language models to write entire texts, write entire papers, closer to the bottom, because although it can do it, it's weaker at it than humans. And there's that danger that humans might use that technique to not do the thinking themselves. And on the right side, we have a column for what humans are good at, the things that we want to leave up to humans as opposed to letting AI do.
(24:34): Brainstorming with AI vs. Human Creativity
On the previous slide, we had brainstormed what AI might be good at, and then it was several months later before we realized, well, wait a sec, we just said brainstorming is something AI is good at, and yet we did the brainstorming as humans. So this is a repeat of that activity where we're asking AI: what are you good at? In this case, I asked ChatGPT, and it came back with a number of other answers, and I've sort of reordered them to put the ones at the top in red that were similar to what was on the previous slide. But then there were some other interesting ones as well, like generating images, translating languages, that sort of thing. And AI in general will always give you good brainstorming. But one key takeaway that I try to infuse into all of our conversations with faculty, staff and students about brainstorming is that it's always best to brainstorm as a human first, because once you have the AI's generated output of brainstormed ideas, it's in black and white, and it feels like it's hard to come up with something that's not already on the page. And so I would always encourage the AI brainstorm to come later.
(25:43): Everyday Uses of AI in Knowledge Work
The question we need to ask ourselves is whether AI can help in almost any given circumstance while I'm doing my day job. If there's a piece of writing, an email, or changing the tone of an email perhaps, or beginning to think about a new project, we should in general be asking if AI could play a role there. So all the things, especially the things that are manual and are done one at a time, like generating a list of URLs, for instance, or getting a transcript of some recorded video, might be things that an AI tool might be good at.
(26:23): Cognitive Offloading and the "Google Effect"
Now we're going to make another shift here, and I'm going to start by referencing an article that Nicholas Carr wrote called "Is Google Making Us Stupid?" in The Atlantic, before he published his book The Shallows a couple of years later. In short, Nicholas Carr thought that Google is at least changing our brains, because brains are, as we know from neuroscience, what's called plastic: they change over time. Your brain gets good at the things you tell your brain to do. And so one thing that we've all gotten pretty bad at doing is memorizing phone numbers, because it seems like we don't need to do that anymore. So if the brain gets good at what we tell it to do, and other skills it has may atrophy, then Google's presence, starting many years ago by now, meant that finding information was an easy thing and could be had at a moment's notice, and therefore brains stopped being good at holding information. So basically Google began what arguably has been a trend anyway in recent decades of considering information as something outside of me. And artificial intelligence runs the risk that that might accelerate a little bit.
(27:36): Mental Atrophy and the Risk of Mediocrity
In fact, there's a potential danger that if we are only as good as artificial intelligence, if we're only writing essays with ChatGPT and never undergoing the process of thinking about writing and doing the actual heavy lifting of mental work, then it's likely, actually, that we will become a little more mediocre in terms of what we know. This is in reference to a 2006 movie called Idiocracy, kind of a silly farce, but the idea being that all of society could face a decline, no longer producing advanced thinkers. And so here is perhaps the time to remind ourselves that employers are in fact looking for AI skills in their employees. I've said it before, that the future of work is not just AI replacing humans, and the future is not humans doing work without AI. The future is both. It's humans and AI. But that Idiocracy slide points out that there's a danger everyone will just use AI to generate outputs.
(28:52): Cognitive Offloading and Educational Challenges
Well, that's AI just doing the job. What we actually need is the ability for members of our society to be better than AI. So use AI, yes, but still know more things than AI itself does. And when it comes to education, one message that we're giving to our students is that they should use AI when it makes sense for gaining AI fluency, but not rely on AI so that AI is doing all the thinking for them. This is collectively known as cognitive offloading, where you offload the work, you outsource the work. And although that sounds like a wonderful thing, when it comes to education the work is actually the point. The process is the point. And so what we tell students is that using ChatGPT to write your essay from start to finish is like lifting weights in the gym with a forklift.
(29:49): Building Mental Muscles and Critical Thinking
It's kind of pointless. When it comes to lifting weights, the struggle is the point. It's supposed to be hard. That's how you build muscle. It's exactly the same metaphor when it comes to mental muscle, so to speak: the struggle is the point. The struggle is what makes you into a critical thinker. We are here in the business of training brains at the university, not just accumulating facts or writing Hamlet essays. We need to convince our students that they want to avoid these shortcuts. That's how they stand out to employers: they can do more than AI.
(30:27): AI Avatars and Digital Twins
Now, it turns out that there are also going to be other types of AI, both in our near future and in the intermediate distance as well. I've got a capture here of one particular type. This is an AI avatar, from a company called Synthesia. What you do is record yourself speaking on video, for something like two minutes of training, and then you type in a script of what you want it to say, and it comes out with a video that really is a deepfake. So having a digital avatar, a digital twin of yourself, is a real possibility already today.
(31:10): Text-to-Video AI: The Sora Example
And then beyond large language models, we've also got text to a complete video. It's a different technology but a similar idea: in this case, not an avatar of myself, but creating a digital reality that is completely fake. Sora is the name of this tool, and it's by OpenAI, the same company that brings us ChatGPT. And this is Wild West drone footage from the 1800s, which of course was not possible, but hyper-realistic. It's just amazing what these tools can do. So watch this space for even more hyper-realistic video to come.
(31:56): Custom GPTs and the Age of Agents
Also present here today are custom-trained GPTs. They're like ChatGPT, except in this case you can essentially give it prompts that are standard and they stay there, like parameters that you tell it: always answer in a rhyme, or, in the case of the image I have on screen, you could custom-train a GPT so that it's always going to talk like a pirate, let's say. And then you can ask it any old question that you normally would ask it: why is the sky blue? And it would answer like a pirate, without having to tell it each time that you want it to answer like a pirate. So it's a custom-trained GPT. It's a version of automating the prompting, anyway. But now we're getting into territory that is a little bit different. You may have heard of agents, or agentic AI, and agentic AI is where it's more than just a website where you go and ask it questions.
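Before moving on to agents, here is a minimal sketch of the standing-prompt idea behind a custom-trained GPT; it is not the actual GPT builder, just the underlying pattern. It assumes the OpenAI Python SDK with an API key in the environment, and the model name and pirate instruction are illustrative.

```python
# Minimal sketch of the "custom GPT" idea: a standing system prompt that is
# sent with every request, so the user never has to repeat it.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
# The model name and instruction text are illustrative choices.
from openai import OpenAI

client = OpenAI()

PIRATE_INSTRUCTIONS = "Always answer like a pirate, no matter what is asked."

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PIRATE_INSTRUCTIONS},  # the standing prompt
            {"role": "user", "content": question},                # the everyday question
        ],
    )
    return response.choices[0].message.content

print(ask("Why is the sky blue?"))  # answers in pirate-speak without being told to
```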
(33:00): AI Agents in Action: Automation and Autonomy
It's more like a computer program that lives in your computer, and it will do things on your computer to solve a goal that you've given it, to accomplish a task that you've given it. One example would be that you tell the agent (ChatGPT Agent is one of them): go find me tickets to the next concert at the Sphere in Las Vegas, and find me the cheapest tickets possible. And it'll open your browser and start looking through things and select certain seats, and then it'll stop before actually buying them and say, do you want to buy? And you can take back control at any point, but otherwise the agent is moving your mouse around and typing things on the keyboard, and all of that without you doing it. This sounds like science fiction, but it's here now. ChatGPT Agent is available on the ChatGPT Plus and Pro levels, so if you've got a paying subscription, you've got access to an agent.
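A minimal sketch of that agent pattern might look like the following; everything here is a hypothetical stand-in, not ChatGPT Agent's real interface. The point is the loop: the agent proposes steps toward the goal, and a human must confirm before anything irreversible, like a purchase, actually happens.

```python
# Minimal sketch of an agent loop with a human in the loop before purchases.
# All functions here are hypothetical stand-ins that only imitate the pattern
# described in the talk, not any real agent product or API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "browse", "purchase", or "done"
    details: str

def plan_next_action(goal: str, history: list) -> Action:
    # Hypothetical planner: a real agent would ask an LLM to pick the next step
    # from the goal and the history. Here we simply script two steps.
    if not history:
        return Action("browse", f"search ticket sites for: {goal}")
    return Action("purchase", "2 tickets in the cheapest available section")

def execute(action: Action) -> str:
    # Hypothetical executor: a real agent would drive the browser (click, type).
    return f"completed: {action.details}"

def run_agent(goal: str, max_steps: int = 10) -> None:
    history = []
    for _ in range(max_steps):
        action = plan_next_action(goal, history)
        if action.kind == "purchase":
            # The agent pauses before spending money; the human can take back control.
            answer = input(f"Agent wants to buy {action.details}. Proceed? [y/N] ")
            if answer.strip().lower() != "y":
                print("Purchase declined; control returned to the human.")
                return
            print(execute(action))
            return
        history.append(execute(action))

run_agent("the cheapest tickets to the next concert at the Sphere in Las Vegas")
```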
(33:59): Artificial General Intelligence: Toward Human-Level AI
Getting a little bit further into the future, people are already talking about, trying to predict, and trying to enact something called artificial general intelligence, or AGI. This would be artificial intelligence that is basically indistinguishable from human intelligence. You can't tell the difference between which one's AI and which one's human; it passes the Turing test. AI that is as good as a human. On screen, I've got an image of a person who doesn't exist. Melissa Sofia has an Instagram, a blog, and many other things, lots of photos, but she's completely fake. Now, Melissa Sofia does not pass the Turing test; it's not artificial general intelligence. But when we combine that sort of deepfake video, audio and imagery with an AI that is good enough to pass as human (we're not there yet, but we could be in a few years), then we'd be talking about different ethical questions, actually. Because if it looks like a human and has the intelligence of a human, does it have rights? An interesting question we'll have to confront.
(35:07): Specialized AI: Discipline-Specific Savants
And then a little bit further from that, we will get what Eric Schmidt, a former CEO of Google, is calling discipline-specific savants. What that would mean is that perhaps there will be one for neuroscience; it would be the smartest neuroscientist on the planet, but that's all it knows. It doesn't know anything about Hamlet, doesn't know anything about medicine, doesn't know anything about poetry. But there'll be one of those for poetry, and there'll be one of those for Hamlet: a discipline-specific savant. And this prediction from Eric Schmidt, that we'll have that within five years, was made in 2025.
(35:47): Superintelligence and Existential Risks
And then we finally get to something a little bit more scary: intelligence that is not just smarter than every other neuroscientist, but smarter than all humans across all the disciplines. Basically, an intelligence that could decide that it is going to have its own goals; it's no longer going to live within whatever restrictions we give it. And people are worried about this. They're calling it superintelligence, right? It's smarter than all of humanity combined. And if we do create a superintelligence, don't we need to give it certain parameters, like don't harm humans? Or what happens if it actually decides it wants to anyway? So this used to be the realm of science fiction, where often these superintelligent systems would look at humanity, or look at the planet actually, and say, well, actually the problem here is humanity, which from a biologist's point of view is kind of true; we're wrecking the planet, right? And decide that it's going to get rid of humanity. So this is where we get interesting science fiction, but this one's pretty far in the future still, and yet it's not completely out of the realm of possibility that we could get there someday.
(36:59): Conclusion: Responsible AI and UCF’s AI for All
So this is a slide meant to wrap things up and let you know that we are continuing our efforts with artificial intelligence at UCF, creating AI, hopefully not a superintelligence that will become a threat to humanity, but artificial intelligence that serves us, makes lives easier and improves lives. Certainly there's a lot of that happening in the various realms of our artificial intelligence, not just large language models and generative AI. We've got research efforts underway at the URL that you see on screen, as well as continuing to push the education side of artificial intelligence, and that's the URL that you see at the bottom: the AI for All initiative, where we are providing training and workshops and resources to students, to staff and to faculty, hoping that we can help convince folks that the future with AI is now, and it's not something we want to avoid entirely, but we want to use it in responsible and proper ways.