What should I, as a young professional, know about Generative AI?

Last week, we hosted our second YPN event, dedicated to Generative AI and its implications for young professionals. The event offered an intriguing discussion on how artificial intelligence is shaping the future of work and what young professionals should take into consideration while using it. The session featured a fireside conversation moderated by Mikko Ketokivi, Professor of Operations Management & Organization Design at IE Business School, with expert insights from Dr. Rana Raouf Farag, PhD, MBA, Head of Google Cloud AI Architecture EMEA-S, who explored both the opportunities and the responsibilities that come with using AI technologies.

Photo: Sami Auvinen/Fotonordica

Traditional AI vs. Generative AI

The first topic addressed was the difference between traditional AI and generative AI. Non-generative (traditional) AI often relies on unsupervised learning methods, where systems analyze patterns in data and learn from user behavior, clustering information based on how people interact with systems. Generative AI, on the other hand, is designed to produce new outputs, such as text, images, or summaries, by making predictions based on large amounts of training data gathered from various sources. Importantly, as Rana emphasized, AI itself does not “know” anything; it functions purely as a tool that generates responses based on patterns and probabilities within its training data.

A central message throughout the discussion was that AI should be seen as a tool that helps people, particularly by lowering the margin of human error and supporting tasks such as content creation, research, and summarization. However, because generative AI systems draw on extremely large datasets from different contexts, the outputs can sometimes be overly generalized or lack precision. For this reason, verifying information and checking sources remains essential.

Another critical topic was the importance of prompting. The accuracy of generative AI responses largely depends on the quality and specificity of the question being asked. Vague prompts increase the likelihood of incorrect answers or so-called “AI hallucinations.” According to Rana, one of the most important skills for modern professionals is therefore learning how to ask the right questions. It is also important to remember that AI can generate ideas and suggestions, but human judgment is always needed to refine the results and decide how to use them.

Using AI Effectively and Responsibly

The conversation also explored the concept of AI agents, which differ from typical generative AI chat tools and are growing in both use and demand. While chat-based systems mainly generate responses, AI agents can be assigned specific tasks, personalities, and permissions that allow them to act on a user’s behalf. In practice, multiple agents can work together as a system: for example, one agent might gather information, another might summarize it, and a third could act on the insights provided. Dividing tasks among specialized agents can improve both efficiency and accuracy.
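The division of labor described above can be sketched in a few lines of code. This is only an illustration of the idea, not any particular product: the `call_model` function below is a hypothetical stand-in for a real generative AI API call, and the agent names are invented for the example.

```python
# Sketch of a three-agent pipeline: research -> summarize -> act.
# call_model is a placeholder for a real model API; here it just
# labels which agent did which step so the flow is visible.

def call_model(role: str, prompt: str) -> str:
    """Stand-in for a real generative AI call."""
    return f"[{role}] {prompt}"

def research_agent(topic: str) -> str:
    # First agent: gathers raw information on the topic.
    return call_model("research", f"collect facts about {topic}")

def summary_agent(raw_notes: str) -> str:
    # Second agent: condenses the researcher's output.
    return call_model("summary", f"summarize: {raw_notes}")

def action_agent(summary: str) -> str:
    # Third agent: acts on the insights, e.g. drafts a briefing.
    return call_model("action", f"draft a briefing from: {summary}")

def pipeline(topic: str) -> str:
    # Each specialized agent handles one step of the overall task.
    return action_agent(summary_agent(research_agent(topic)))
```

Keeping each agent's task narrow is exactly the point made in the discussion: a focused prompt per step tends to be easier to verify than one monolithic request.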

Responsibility was another recurring theme. As Rana noted, abstaining from AI is not a realistic option in today’s rapidly evolving technological landscape. Instead, responsible use is key. Because AI outputs depend on the data used to train each system, different tools may produce slightly different answers. Users must therefore rely on critical thinking, verify sources, and maintain accountability for how AI-generated content is used.

The discussion later expanded to include Johanna Jacobsson, Adjunct Professor at IE Law School and Founder & CEO of Lawcrosse, focusing on education and the responsibility of preparing younger generations to use generative AI effectively. The consensus was that this responsibility should not fall on a single course or individual but should be shared across educational institutions and disciplines. Proper training in AI literacy is essential, as misuse or uncritical reliance on AI tools could lead to issues such as misinterpretation of context or copyright challenges.

Finally, practical advice was offered for improving interactions with generative tools. One recommended approach is to prompt AI systems to respond only when they have sufficient factual information and to rely on verified, factual sources whenever possible. This helps reduce inaccuracies and improves the reliability of generated responses.
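That advice can be made concrete with a small prompt-building helper. The function below is a hypothetical illustration, assuming a list of pre-verified source snippets; it simply wraps the question with an instruction telling the model to answer only from those sources, or to say it lacks enough information.

```python
# Sketch of "ground the model in verified sources" prompting.
# build_prompt is an invented helper: it returns the text you would
# send to a generative AI system, not a call to any specific API.

def build_prompt(question: str, sources: list[str]) -> str:
    source_block = "\n".join(f"- {s}" for s in sources)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain enough factual information, "
        "reply: I don't have enough information.\n\n"
        f"Sources:\n{source_block}\n\n"
        f"Question: {question}"
    )
```

The explicit fallback instruction gives the model a sanctioned way to decline, which is one simple way to reduce the hallucinations mentioned earlier.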

Overall, the event highlighted both the potential and the responsibility that come with generative AI. For young professionals, the key takeaway is clear:

“AI can be a powerful assistant, but its effectiveness ultimately depends on the user’s ability to ask thoughtful questions, verify information, and apply their own judgment.”

Photo: Sami Auvinen/Fotonordica
