What Does AI Think I Look Like?

A digital identity experiment (and a cautionary note on privacy…)


Lately, my LinkedIn feed has been flooded with an interesting AI image generation trend: educators generating action figures of themselves using ChatGPT’s upgraded image tools. It’s a fascinating way of turning your career, personality, and values into a visual micro-narrative, something like a professional Pokémon card (though I already did that as part of my poster presentation for the Herschel Programme for Women in Technical Leadership, back before the days of AI, when all I had was Photoshop and imagination. Remember that?)

Naturally, I wanted to play too. But instead of uploading a photo of myself and prompting ChatGPT to create a stylised version, I decided to flip the concept entirely.

What would happen if I didn’t give the AI an image to work from at all, and instead asked it to rely purely on pre-existing information? What if I asked ChatGPT:

“Based on what you know about Beth, what do you think I look like?”

If you’re new to using ChatGPT and have very limited personal data freely available on the internet, I imagine this would be tricky for the AI. But what if you had your own custom GPT built on a lot of professional knowledge about you? With the amount of information about myself that I had fed into my personal chatbot (MechaBeth), I thought it could make a more reasonable assumption about my appearance.

Who (or what) is MechaBeth?

MechaBeth is a custom GPT chatbot I created, built with detailed information about my teaching, research, clinical work, inclusive practice, and AI experimentation. It's a digital echo of me. I initially created MechaBeth as a basis for possible virtual patients, but I later realised how impactful it could be from an employability standpoint. I go into more detail on its creation in another post, but if you imagine having a chatbot that could answer potential interview questions based on your own experience and expertise, you can see how this technology could benefit our recent graduates. Over time, MechaBeth has become a pretty accurate reflection of my professional voice and values. So, I was curious: could that information shape not just what I say, but how I might appear to the AI?

From Data to Visual Identity

Without uploading an actual image of myself (more on why not below), I asked ChatGPT to generate what it imagined I looked like—based solely on my employment history, writing style, research areas, and values embedded in MechaBeth.

The result? Intriguing.

The image initially showed someone slightly androgynous, with multi-coloured hair and a confident stance, wearing a futuristic blend of clinical gear and tech accessories reflecting my love of all things dental and technological. In both versions, the figure wore glasses; if we’re looking at stereotyping here, did it assume that my love of tech made me a nerd? It is kind of spot-on, though. The action figure was particularly cool: wielding an AI-augmented arm, it looked like a hybrid of Thanos and a dental nurse, a literal manifestation of how entwined my work is with emerging technology. It felt like MechaBeth made visible, in the best way possible.

This playful experiment ties back beautifully to teaching identity theory. As educators, we constantly navigate between our authentic selves and the "versions" of ourselves that exist in virtual spaces, as seen in Lave & Wenger’s situated identity, or Goffman’s presentation of self applied to the digital age. Creating a visual version of yourself from language alone pushes that idea even further: what would your professional footprint look like if it took physical form? In my case, apparently, it includes glasses, teeth, holograms and tech-enhanced limbs. I’ll take it.

Let’s Talk About GDPR & Data Ethics

While this experiment was fun, it’s also a good time to pause and talk about data protection and AI image generation, especially for educators. If you’re considering uploading a photo of yourself into an image generator like DALL·E, Midjourney, or Ideogram, here are some things to keep in mind:

  • Once uploaded, your image may be stored or used to train models, depending on the platform’s policies. That has serious implications under GDPR, which protects identifiable personal data, including specific facial features.

  • Even if a platform says it doesn’t store images, transparency is limited. What happens to your biometric data behind the scenes isn’t always clear.

  • Many generative platforms currently offer no practical mechanism for deletion or auditing, meaning you may lose control over how your image is replicated or used.

So instead of uploading your face, why not try generating an AI version of yourself based on text only? It’s safer, often just as insightful, and sparks a whole different kind of reflection on your professional identity.

That being said, I’m a sucker for AI fun, and my LinkedIn picture is already available on the internet, so here are some creative interpretations of it:

My new range of profile pictures from ChatGPT

Try It Yourself (Safely)

You don’t need to upload anything to try this. Instead:

  1. Write a short paragraph about your professional self (or use your custom GPT!).

  2. Prompt ChatGPT with:

    “Based on this description, what do you imagine I might look like?”

  3. Check out your image based on that conceptual identity.

To spice things up a little more, ask someone else to describe you without mentioning specific physical features. Pop that in as the prompt and see how you are reflected in the eyes of others. You’ll be amazed at what the algorithm reflects back. And maybe a little inspired, too.
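If you want to script the text-only approach rather than typing it into the chat window, the steps above can be sketched in a few lines of Python. Everything here is illustrative: the `build_identity_prompt` helper, its wording, and the example description are my own assumptions, not a fixed recipe. The resulting prompt is what you would paste into ChatGPT (or pass to an image-generation API of your choice).

```python
# A minimal sketch of the text-only experiment: compose the image prompt
# from a written professional description instead of uploading a photo.
# The function name and prompt wording are illustrative assumptions.

def build_identity_prompt(description: str) -> str:
    """Wrap a professional self-description in the image-generation question."""
    return (
        "Based on this description, what do you imagine I might look like? "
        "Generate an image of that person. Description: " + description.strip()
    )

# Example self-description (step 1) - swap in your own paragraph,
# or paste in output from your custom GPT.
profile = (
    "A dental educator who blends clinical teaching with AI experimentation, "
    "inclusive practice, and a love of emerging technology."
)

# Step 2: build the prompt; step 3 is sending it to your chosen tool.
prompt = build_identity_prompt(profile)
print(prompt)
```

Keeping the prompt construction in plain text like this also makes the privacy point concrete: nothing biometric ever leaves your machine, only words you chose to write.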
