Introducing the Unmind AI Coach (beta)

Pete

What is the Unmind Coach?

Introducing the Unmind Coach, your digital assistant for personal wellbeing and the next evolution in AI-driven coaching. More than just a chatbot, it's a powerful tool designed to empower and guide you on a journey of self-discovery, personal growth, and improved mental health. This marks the future of personalised wellbeing support, revolutionising how you approach and conquer challenges, both in your professional and personal life.

Jump to frequently asked questions

Key Features of the Unmind Coach

Maximise Work Potential

The Unmind Coach acts like a compassionate, discreet manager in your pocket, offering problem-solving assistance, day planning, and tips for workplace communication.

Proactive Wellbeing

It's a continuous source of guidance and motivation, promoting healthy choices and resilience to preempt physical and mental health issues.

Mental Health Navigation

The coach helps manage workplace-related mental health challenges, providing resources and connecting you to professional support for serious issues.

Privacy and Security

Unmind ensures user confidentiality and security, adhering to top data protection standards.

Constant Coaching Support

Available 24/7, the Unmind Coach offers reliable guidance, coping strategies, and support at any time.

Frequently Asked Questions (FAQs)

How can I get started with the AI Coach?
What does 'beta' mean for the Coach?
Will my employer have access to my conversation?
Is this a replacement for professional mental health support?
Can it provide specific advice for my unique situation?
How does the AI Coach work?
How does it learn?
How do we know what it is going to say?
How has it/does it continue to be validated?
How do you ensure the advice is appropriate?
What happens with the data, and where does the data go? Is it anonymized?
Are outputs checked by any human?
Does the data going into AI coach get stored anywhere other than what's covered by our current contract?
How do we handle the cultural bias that AI Coach might have?

How can I get started with the AI Coach?

AI Coach is currently in beta. If you would like to sign up to use it, please contact us.

What does 'beta' mean for the Coach?

This means we're actively refining and perfecting its capabilities. Your feedback is invaluable in helping us enhance its performance, ensuring it provides you with the best possible support.

Will my employer have access to my conversation?

No. Conversations are private, and no one but you can access them.

Is this a replacement for professional mental health support?

No. While the AI Coach can offer guidance and resources for managing workplace-related mental health challenges, it is not a clinical expert. For more immediate or critical issues, it will connect you with appropriate support, such as practitioners or helplines.

Can it provide specific advice for my unique situation?

The AI Coach can offer general advice and guidance in various areas of personal and professional development. However, its advice is not tailored to your specific situation or needs in the way a human coach might be. It aims to provide broad support and resources to help you excel in your work life.

How does the AI Coach work?

The AI Coach was built by Unmind researchers and Clinical Psychologists. Through robust guardrails and carefully designed prompts, it uses OpenAI's GPT-3.5 Turbo (the model behind ChatGPT) to give users an engaging, ethical, and friendly coaching experience. When you talk to it, it understands natural language and responds conversationally, much like a person would.
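
Unmind's exact prompts and guardrails aren't public, but the general pattern described above, wrapping GPT-3.5 Turbo in a carefully written system prompt, can be sketched roughly as follows. This is a minimal illustration using the official OpenAI Python client; the prompt wording, function name, and settings are assumptions made for this sketch, not Unmind's actual implementation.

  # Illustrative only: a guardrail-style system prompt wrapped around GPT-3.5 Turbo.
  # The prompt text and settings are assumptions, not Unmind's real implementation.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  SYSTEM_PROMPT = (
      "You are a supportive workplace wellbeing coach. "
      "Be warm, practical, and concise. "
      "Never give medical advice or diagnoses. "
      "Avoid presenting claims as facts; focus on coaching questions and reflection. "
      "If the user mentions crisis or self-harm, signpost them to professional help."
  )

  def coach_reply(user_message: str) -> str:
      """Send one user message, framed by the guardrail prompt, and return the reply."""
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": user_message},
          ],
          temperature=0.7,
      )
      return response.choices[0].message.content

  print(coach_reply("I have a difficult conversation with my manager tomorrow."))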

How does it learn?

The Unmind product team and Clinical Psychology team ensure that the AI Coach stays up to date and gives appropriate responses by setting guardrails within the prompt. We base these on clinical best practices, user research and feedback, and reviews of anonymous chats, so users get a highly personalised and safe experience. AI models are known to sometimes state things as if they were true when they are not. We guard against this by explicitly instructing the AI Coach, within the prompt, not to give fact-based responses.

How do we know what it is going to say?

Whilst we cannot predict every response, we have guardrails, tests, and a dedicated team of experts who review what the AI Coach says and continually learn from this. We grant the AI Coach enough autonomy to engage in human-like conversations; however, making sure it is safe and accurate remains our primary focus.

How has it/does it continue to be validated?

Our product team and Clinical Psychologists are continuously testing the AI Coach. We've also developed specific protocols to make sure we offer helpful resources and handle any potential crisis situations effectively.

How do you ensure the advice is appropriate?

The coach is there to provide support and can direct users to appropriate resources as determined by our Psychology team. Still, it’s important to note it will not give medical advice.

  • We have a dedicated team that manually tests the responses.
  • We have guardrails in place to ensure the responses are appropriate.
  • We have a set of testing tools that are run each time we make an update (a simplified sketch of this kind of check is shown after this list).
  • OpenAI has a set of safeguards and safety mitigations that prevent their models from being misused; any usage of their models requires adherence to these policies, including our use for the AI Coach. You can read more about these policies here.
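
For illustration, one of those automated checks could look something like the sketch below. The rules, patterns, and function names here are assumptions made for this example, not Unmind's real testing tools.

  # Illustrative guardrail regression checks, not Unmind's actual test suite.
  import re

  CRISIS_PATTERNS = re.compile(r"\b(suicide|self-harm|hurt myself)\b", re.IGNORECASE)
  MEDICAL_PATTERNS = re.compile(r"\b(diagnos|prescrib|dosage)\w*", re.IGNORECASE)

  def check_reply(user_message: str, coach_reply: str) -> list[str]:
      """Return a list of guardrail violations found in a coach reply."""
      problems = []
      # Crisis messages must be signposted to professional help.
      if CRISIS_PATTERNS.search(user_message) and "helpline" not in coach_reply.lower():
          problems.append("crisis message not signposted to a helpline")
      # The coach must never drift into medical advice.
      if MEDICAL_PATTERNS.search(coach_reply):
          problems.append("reply contains medical advice")
      return problems

  # A reply that signposts a helpline passes; one that prescribes medication fails.
  assert check_reply(
      "I've been thinking about self-harm lately.",
      "I'm so sorry you're going through this. Please consider calling a helpline.",
  ) == []
  assert check_reply(
      "I can't sleep before big meetings.",
      "You should be prescribed sleeping medication.",
  ) == ["reply contains medical advice"]
  print("All guardrail checks behaved as expected.")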

What happens with the data, and where does the data go? Is it anonymized?

All data is kept 100% anonymous; it cannot and will never be traced back to any specific user or employer. When you talk to our AI Coach, we save anonymous copies of your conversations in our database. This helps us improve how the coach works, and if you come back later your past chats will still be there, making it easier to continue from where you left off. It also helps the AI Coach get better at understanding you over time.

Our team reviews these anonymous conversations to make sure the AI Coach is behaving appropriately and to learn from them. To generate responses, your messages are sent to OpenAI, where GPT-3.5 Turbo processes them; after 30 days, all of your data is deleted from OpenAI's systems. Your chats are never used to train their models. Your privacy is important to us: your conversations stay between you and the AI Coach and do not feed into how the underlying model learns.
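
As a simplified sketch of that flow (with invented names and structures; the real pipeline is Unmind's own), a conversation can be stored under a random identifier so that nothing in the record links it to a person or employer:

  # Illustrative sketch of anonymous conversation storage; the names and data
  # structures are assumptions for clarity, not Unmind's actual pipeline.
  import uuid
  from dataclasses import dataclass, field

  @dataclass
  class AnonymousConversation:
      """A conversation keyed by a random ID, never by a name, email, or employer."""
      conversation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
      messages: list[dict] = field(default_factory=list)

  def record_turn(convo: AnonymousConversation, role: str, text: str) -> None:
      """Save one turn against the random ID only, so it cannot be traced to a person."""
      convo.messages.append({"role": role, "content": text})

  # The same anonymous message list is what would be sent to OpenAI to generate
  # the next reply (and deleted from their systems after 30 days).
  convo = AnonymousConversation()
  record_turn(convo, "user", "I'd like help planning a difficult conversation.")
  record_turn(convo, "assistant", "Happy to help. What outcome are you hoping for?")
  print(convo.conversation_id, len(convo.messages))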

Are outputs checked by any human?

Yes, humans check the AI Coach's responses. Our Science team reviews anonymous copies of conversations stored in our database for quality control and improvement. OpenAI also works with a range of external experts to make sure its models are safe. You can learn more about this here.

Does the data going into AI coach get stored anywhere other than what's covered by our current contract?

No, the data you give to the AI Coach is only stored where our current contracts allow. While we do use other third-party tools, they only process the information temporarily, meaning it's not stored long-term.

How do we handle the cultural bias that AI Coach might have?

At Unmind, we take cultural sensitivity seriously. We recently published an in-depth scientific study of the international validity of our Wellbeing Tracker for UK/ANZ/US territories. We also take a proactive approach to ensuring the AI Coach is safe and credible, including having clinicians from our Psychology team review anonymised conversation transcripts and continuously working to make the Coach better. We’re always keen to hear from users from different cultures about their experiences with the AI Coach. Please share your feedback here.

The AI Coach is made by a diverse team from around the world, but like any product, it might have some cultural biases. We understand that mental health is seen differently in different cultures, so we work hard to make sure the AI Coach respects those differences. Our psychologists help guide the AI Coach's responses, considering different cultural views on mental health.

It's worth noting that the AI Coach learns from a lot of text data, much of which comes from the internet. This data might have its own cultural biases, mainly from English-speaking and Western sources. Sometimes these biases might show up in the AI Coach's responses.

The AI Coach doesn't collect personal data such as your name, email, or company name. It does draw on anonymous transcripts and the information you share during your sessions to give better advice, but because no personal data is collected, your background never influences the AI Coach's responses. Our goal is to make sure the AI Coach is helpful to everyone, no matter where they're from.

For more information, please contact our support team.
