
Ubuntu AI podcast: What is Generative AI?

Hasmik Zmoyan

on 16 October 2023

Tags: AI , AI/ML , podcast , Ubuntu



From fun experiments to enterprise projects, AI has become the center of attention when it comes to innovation, digital transformation and optimization.

Open source technologies have democratized access to state-of-the-art machine learning tools and opened doors for everyone ready to embark on their AI journey.

These days everyone is talking about generative AI, and everyone wants their own ChatGPT. Many companies feel anxious about not already having generative AI projects in production. In this episode we dive into generative AI.

So what is generative AI?

Traditionally, whenever you build any kind of AI system, it could be described as discriminative AI.

We train ML models so that they can make predictions or classifications on new and unseen data, right? These ML models work with numbers, with text, with pictures. But basically, if we input an image of a dog to such a model, it will discriminate and tell us whether it's a dog or not a dog, or a cat, or something like that.
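To make that concrete, here is a minimal sketch of a discriminative model using scikit-learn. The synthetic feature vectors stand in for real image embeddings; the data and labels are illustrative assumptions, not anything from the episode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic 8-dimensional "embeddings" standing in for real image features.
rng = np.random.default_rng(0)
dog_features = rng.normal(loc=1.0, size=(50, 8))
cat_features = rng.normal(loc=-1.0, size=(50, 8))

X = np.vstack([dog_features, cat_features])
y = np.array(["dog"] * 50 + ["cat"] * 50)

# A discriminative model learns a decision boundary between the classes.
clf = LogisticRegression().fit(X, y)

# Given a new, unseen input it only tells us which label fits best;
# it never produces a new image or new text.
new_embedding = rng.normal(loc=1.0, size=(1, 8))
print(clf.predict(new_embedding))  # e.g. ['dog']
```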

In generative AI the approach is different.

We train ML models so that they create output similar to the data they have been trained on. These kinds of models work with different kinds of data: text, images, audio and so on.

In the dog example, this would work like this: if we have a model trained on text from various sources and we prompt it with a question like "What is a dog?", we might get an output in the form of a description ("a dog is an animal that has four legs…"), a picture of a dog, or something like that.
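As a rough sketch of that generative behaviour, here is a prompt sent to a small open text-generation model via the Hugging Face transformers pipeline. The gpt2 model is just an illustrative choice, not a model discussed in the episode.

```python
from transformers import pipeline

# A small generative language model: it produces new text rather than a label.
generator = pipeline("text-generation", model="gpt2")

result = generator("What is a dog?", max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```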

So this is a slightly different approach to the topic, and it also involves a lot of different tooling. The whole operational processes are different: you have LLMOps and all kinds of strange new names for basic operations. That, I think, is the most important difference between them, and that's why people consider it a breakthrough or a new thing. But actually it's still based on the same mathematical theories from 30 years ago.

And what are LLMs?

LLMs are a subset of generative AI. Large language models are something we would use for text classification, Q&A (the famous chatbot experience), summarizing documents or generating new documents.

LLMs basically fall into a couple of different categories. There are (1) generic language models that do, at a glance, a very simple thing: they predict the next word or phrase based on the language and the way it should be written. So basically it's sort of an advanced version of your email auto-completion.
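A minimal sketch of that next-word behaviour, assuming the Hugging Face transformers library and gpt2 as an illustrative generic model: we feed in a prefix and look at the most likely next tokens.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# An unfinished sentence, like the start of an email.
prompt = "See you at the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, sequence_length, vocab_size)

# The model's top guesses for the very next word.
top = torch.topk(logits[0, -1], k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```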

But there are also other types of LLMs. There are (2) instruction-tuned models, which are basically trained to produce a response given an input. So models that summarize text, turning a 50-page research paper into a one-page summary, are also something LLMs can do.
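A short sketch of an instruction-tuned summarizer, again using the transformers pipeline; the model name and the input text are illustrative assumptions.

```python
from transformers import pipeline

# A model fine-tuned specifically for the "summarize this" instruction.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Large language models are trained on huge amounts of text. "
    "Generic models simply predict the next word, while instruction-tuned "
    "models are further trained to follow requests such as summarizing, "
    "translating or answering questions about a given document."
)

summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```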

There are also (3) dialogue-tuned models, of which the famous ChatGPT and other chatbots are typical examples.
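Dialogue-tuned models take a whole conversation as input rather than a single prompt. A minimal sketch, assuming the OpenAI Python client with an API key set in the environment; the model name is an illustrative assumption.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

# The input is a structured conversation, not just one string.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a dog?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```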

Where to learn more about AI?

First, make sure to subscribe to our bi-weekly podcast, where we discuss AI. Listen to our episodes on Spotify or Apple Podcasts.

If you want to learn more about all the AI solutions we provide, check out our website and feel free to get in touch with us via the contact forms or live chat.
