Newsletter
As part of the “AI-Humanitas: Enabling AI-Skills” series, Dr. Wolfgang König and The Decoder sat down with Daniela Todorova (Director Learning) and Lara Bems (Skilling Program Manager) from Microsoft Germany. The conversation focused on how Copilot is used in everyday work, what sets Copilot apart from ChatGPT, the requirements of the EU AI Act, and why AI skills are now a basic necessity for companies.
König: What exactly does a Director Learning do at Microsoft Germany, and how do you personally define AI competence?
Daniela Todorova: As Director Learning at Microsoft Germany, I am responsible for the strategic direction and implementation of learning and training initiatives. The goal for me and my team is to promote a culture of lifelong learning – for customers as well as for our employees and young talents.
For me, AI competence means having a fundamental understanding of artificial intelligence, data and ethical principles – combined with the ability to think critically and take responsibility. The understanding of roles is crucial here: we humans remain captains; AI is our co-pilot.
However, as AI agents autonomously take on more and more routine tasks, employees also need what is known as “dual team capability”: the ability to work successfully in a team with other people as well as with AI. Only those who remain curious, adaptable and willing to experiment will be able to develop this capability.
The Decoder: Isn’t there a contradiction here? What does teamwork with AI agents look like in practice? Aren’t these systems too unreliable to be more than co-pilots? And what’s the technical difference between a co-pilot and an agent?
Todorova: For me, there is no conflict in the fact that we as humans remain the captains while AI agents autonomously take over routine tasks. I like to compare this to a modern airplane: the human sits in the cockpit, makes the decisions and bears the responsibility. But the autopilot, in our scenario the AI, takes over many tasks that are standardized and repeatable. This takes the pressure off us and creates space for what really counts: creativity, strategy and humanity.
AI can suggest, analyze and automate, but it follows our instructions. And if there is turbulence, we intervene. So, this collaboration is not competition, but a new form of teamwork.
To increase reliability, we need a combination of clarity and purpose, governance, monitoring and human responsibility: the AI agent can only act reliably if its purpose and tasks are clearly defined. Training plays a key role here: reliability depends largely on how well the agent has been trained.
With a view to governance, it is advisable to implement so-called “guardian AIs” that monitor other agents and intervene if they deviate from the course. In this context, it is important that all AI agent decisions are documented and auditable, and that risk analysis and mitigation tools are in place to identify potential weaknesses.
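The governance pattern described here – a guardian component that checks an agent’s actions, plus a documented, auditable decision log – can be sketched in a few lines. This is a hypothetical illustration of the pattern, not a Microsoft API; all names are invented.

```python
import json
import time

# Hypothetical guardian pattern: every agent action passes through a
# policy check, and every decision is appended to an auditable log.
AUDIT_LOG = []

def guardian_approves(action: dict) -> bool:
    """Policy check: here, a simple allow-list of task types."""
    return action["task"] in {"send_reminder", "summarize", "schedule"}

def run_agent_action(action: dict) -> str:
    """Run one agent action through the guardian and record the outcome."""
    decision = "approved" if guardian_approves(action) else "blocked"
    # Document the decision so it can be audited later.
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "action": action,
        "decision": decision,
    })
    return decision

print(run_agent_action({"task": "send_reminder"}))   # approved
print(run_agent_action({"task": "delete_records"}))  # blocked
print(json.dumps(AUDIT_LOG[-1], indent=2))           # auditable record
```

In a real deployment the allow-list would be replaced by a risk policy, and the log would go to tamper-evident storage rather than a Python list, but the shape – check before acting, record every decision – is the same.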
To answer your last question: the difference between a co-pilot (e.g. Microsoft 365 Copilot or ChatGPT) and an AI agent is enormous. Co-pilots belong to the field of generative AI, whereas agents fall under what we call agentic AI. Generative AI is a form of artificial intelligence that can independently generate new content such as text, images, music and code. However, it always needs a human impulse – a prompt – to act.
Agentic AI, on the other hand, goes one step further. It is designed to pursue goals independently, make decisions and control multi-stage processes without a human having to initiate every step.
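The distinction can be made concrete with a minimal, mocked sketch: generative AI produces one output per human prompt, while an agentic loop plans steps toward a goal and executes them without a human triggering each one. The model calls are stubbed here so the control flow is visible; nothing below is a real product API.

```python
# Mocked sketch of generative vs. agentic AI. A real system would call
# an LLM; here responses are stubbed so only the control flow matters.

def generative_ai(prompt: str) -> str:
    """Generative AI: one human prompt in, one piece of content out."""
    return f"Draft based on: {prompt}"

def agentic_ai(goal: str) -> list[str]:
    """Agentic AI: given a goal, plan steps and execute them
    autonomously, without a human initiating each step."""
    plan = [f"step {i}: work toward '{goal}'" for i in range(1, 4)]
    results = []
    for step in plan:  # each step runs without a new human prompt
        results.append(f"done: {step}")
    return results

print(generative_ai("write a meeting summary"))
print(agentic_ai("prepare quarterly training report"))
```

The key structural difference is the loop: the generative function returns after a single call, while the agentic function keeps acting on its own plan until the goal is worked through.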
König: The EU AI Act brings new requirements, especially the obligation to foster AI skills in organizations. In your view, what practical steps should companies take if they want to deploy AI tools like chatbots for their workforce? What’s the minimum level of AI competence employees should have?
Todorova: To meet the requirements of the EU AI Act and provide Microsoft Copilot to the workforce, organizations should foster a learning culture in which leaders act as role models and support cross-functional collaboration.
Regarding AI skills, managers should create the framework for an internal AI learning task force that regularly explores AI use cases together and gains learning experiences. This is the only way to build a culture of learning and experimentation that encourages curiosity, a positive error culture and informal peer learning. The most important thing here is to teach employees the components of responsible AI: fairness, reliability and security, data protection, transparency, accountability and inclusion – these are also our principles for responsible AI at Microsoft. After all, companies that use responsible AI solutions not only strengthen trust in the technology but are also drivers of innovation.
König: Microsoft also has to make sure that tools like Copilot meet the EU AI Act’s requirements. What’s especially important for businesses that want to use Copilot? What skills matter for providers versus operators?
Todorova: At Microsoft, we are committed to providing our AI solutions in compliance with the EU AI Act. What’s more, we work closely with the EU AI Office, relevant authorities and member states to share our findings, clarify open questions and develop practical solutions.
Internally, we promote compliance with the requirements with regular learning days, hackathons and AI skills challenges for our employees – both throughout the company and on a role-specific basis.
In addition to teaching AI skills, it is important for companies to understand their own AI footprint. This includes regular review and risk assessment of the AI solutions used, as different requirements need to be met depending on the type, model and system. Ensure that your governance framework meets the requirements of the EU AI Act to secure responsible development and deployment of AI systems.
And finally, I can only recommend that you actively participate in the regulatory process: Stay informed about regulatory developments on an ongoing basis and engage with policy makers and industry associations to ensure compliance with new regulations in the future.
König: We’ve talked about using AI at work. From your perspective, what role does learning with AI-powered tools play in today’s workplace?
Lara Bems: In the field of learning, AI offers us incredible potential. AI can help us to develop new learning methods and design our own personal learning journey tailored to our needs. Generative AI can also be used to create new learning content and provide personalized learning recommendations or feedback. For example, I used Copilot as a quiz master to prepare for an upcoming certification – and passed.
In order to help companies build precisely these skills in their employees – and to gain an overview of which skills already exist in the company – Microsoft is launching “People Skills” in June: a service, integrated into Microsoft 365 Copilot and Microsoft Viva, that records and manages employees’ skills and competencies and includes a matching skills agent.
König: AI tools are changing our workplace culture. What cultural and structural shifts are necessary when AI becomes a regular part of daily work?
Todorova: The introduction of AI solutions in a company is unlike traditional software projects from the past, where only the IT department is involved. AI influences the entire organization across all departments and roles, and its implementation should be closely monitored accordingly.
This is where managers play an essential role in promoting a culture of change and encouraging experimentation. They should lead by example, break down silos and promote cross-departmental collaboration.
In addition – and I always like to emphasize this as part of my role – organizations should prioritize learning and training. Charles Jennings, co-founder of the 70:20:10 Institute, sets out a simple but effective formula for success here: The speed of organizational learning should be greater than the speed of external change. Companies achieve this by promoting continuous learning and maintaining agility in times of rapid change.
Depending on the size of the company, it may also make sense to create new roles, such as a Chief AI Transformation Officer, and set up a type of transformation office to support the change management process, particularly regarding company-wide communication and deployment. In this way, new AI use cases can be safely explored.
The Decoder: Many readers are probably wondering: What’s the difference between Copilot and something like ChatGPT?
Bems: ChatGPT and Microsoft Copilot are both based on generative AI and use GPT language models from OpenAI. Both tools therefore make excellent creative sparring partners: I can use them to brainstorm, draft texts or refine my ideas. The tools are versatile – even in everyday life.
Microsoft 365 Copilot is also my productive assistant in my day-to-day work. Unlike ChatGPT, it is directly integrated into my Microsoft 365 environment – in Outlook, Word, Teams, Excel and PowerPoint. It therefore knows the context of my work, my appointments, emails and documents. It can help me to complete my tasks faster, summarize content from meetings or documents or even make relevant suggestions for programs and strategies.
Another important difference lies in data protection and compliance: unlike an external AI solution such as ChatGPT, with Microsoft 365 Copilot my input and data remain entirely within my company’s protected Microsoft 365 environment. Copilot meets strict data protection and compliance requirements (e.g. GDPR) and does not use user data to train the underlying AI models. This means I can reap the benefits of AI without violating company privacy policies or compliance regulations.
König: To wrap up, a personal question: How do you use AI-powered tools in your work or personal life—and what matters most to you?
Todorova: I use Copilot’s capabilities daily for various tasks. For example, when writing my communications, I use it to make sure that the tone and key messages are balanced and clear. I also generate quick insights into market and industry dynamics to prepare for client meetings, which improves the relevance of my approach.
Recently, I have been using Copilot to brush up on certain historical events to help my son prepare for his history exam. In addition, I have used it to suggest areas for my own development based on my previous experience.
We are preparing for the next fiscal year by focusing on opportunities for improvement. My team has identified three process-related problems that we are tackling by using AI agents. The first agent is designed to help speed up our collaboration with our partners through automated follow-ups. The second helps the team to simplify the planning and tracking of our premium trainings for our customers – through automatic reminders and assignment of responsibilities.
These are a couple of very different scenarios, but they show one thing: An open approach to AI technology and a willingness to experiment in different scenarios are key.
Bems: I personally use Copilot for classic use cases such as summarizing meetings, prioritizing my tasks, or as a sparring partner for innovative ideas for our training initiatives.
I’m also discovering the world of agents and have already built two myself: one helps our team to quickly find the resources, information and news they need for upcoming client meetings – the other agent is my personal writing coach and helps me to refine white papers and articles.
But I also use Copilot or ChatGPT in my private life: to create birthday invitations, for example, but also to suggest recipes, shopping lists or an entire vacation plan.