Uli's LinkedIn posts


(Photo of my new book)

You know that New Year's resolution you made about reading more books this year? Well, I've got just the thing to kick off your 2024 reading list: I'm excited that my new book "Neurons & Nonsense – How AI rewired my creative DNA" is now available as a paperback and ebook, and you can get the ebook for free from January 20th to 24th!

What's the book about? Is it another one of those "AI is changing the world and shaking up all industries and eating all our jobs" stories? Not quite. What would I know about this, anyway! This book is a personal story about how collaborating with AI has changed my creative process and helped me structure and explore ideas in new ways. It's about how I now understand my own thoughts much better, produce much better texts, create pieces of artwork, and take my mind in directions I never thought possible. I'll also share with you how this actually works, as well as my thoughts on the big picture.

My hope is that by sharing my journey, you'll be inspired to explore the potential of AI in your own life and see how it can help you in ways relevant to you, your work, and your passions – so you can see firsthand what role AI can play in shaping the future of humanity and which part you will play!

This book would not exist the way it is without Keith B. Carter, Amaline Lim, Samantha AI, and many others who have helped, inspired, and encouraged me to drive this very personal project. Thank you ever so much.


(photo of a Van Gogh style painting of the universe)

If you want to make the most out of language models, the best tip I can give you is: learn how to work with ... language!

Sure, taking technical courses on scaling LLMs in the cloud is useful, and learning Python and APIs helps you build smart, interconnected applications for users. But at the core, it's understanding language – improving your writing skills, how you describe things, and your grasp of linguistic nuances – that will elevate your proficiency with language models.

The best part? You don't need to be a technical person at all! And think of all these positive "side effects" that come with reading more, writing more, and better structuring your thoughts!

Here is the prompt for the image: "A photo of artwork in a painter’s studio. Begin with a canvas primed for oil painting. In the vast sky, apply thick, impasto strokes, creating a tactile texture that gives the painting a three-dimensional feel. Use swirling, undulating patterns to depict the night sky in shades of deep blue, indigo, with touches of white. The strokes should be bold, energetic, and almost give a sense of motion. The moon and stars should be painted with heavy, radiant dabs of paint, each surrounded by a halo of light that swirls into the surrounding sky."


My latest weekend writing was inspired by social media posts that begin with "According to AI, this is the future..." or "I tried ChatGPT and the results were bad – is AI really that smart?" Have you ever wondered which "AI" they're talking about? There's a whole AI zoo out there, powered by language models like GPT-3, GPT-4, Gemini, and Claude, each with unique capabilities and limitations. It's a good idea to get to know them better!

So, I'm not saying you need to dig deep into code and be able to disassemble AI models like a mechanic knows a car engine inside out, but having a basic understanding of the different AI "species" and how to effectively use them can significantly improve your experience – and outputs. Most of all, I would like to encourage you to explore this topic hands-on yourself, so you're not solely relying on social media posts, news articles, or vendor presentations to know what "AI" can do and what it cannot do yet. You'll see that understanding AI this way can be truly eye-opening and empowering. Check out my article!


(visual of data maps)

Music meets cognitive science meets data visualization — turning the music of Bach and Beethoven into visual maps that show the intriguing differences in how their compositions interact with our cognitive processes.

Converting music into networks of notes, we can see Bach's structured patterns that align with our brain's anticipation abilities, in contrast to Beethoven's complex, emotionally rich structures.

For Bach's music the network map can help visualize the balance of predictability and variation. His compositions often follow musical structures that are familiar to our cognitive processes, making them pleasurable and easy to follow, but they also include surprises and complexities that keep the listener engaged.

Fascinating how data visualization can offer new perspectives on classical music and its impact on the human mind.


"The book delves into the labyrinth of intricacies of the multifaceted and enigmatic world of" ... have you seen texts that sound like this lately? If you're guessing it's AI-generated, you might be onto something.

Using AI to ideate, draft, and polish content is becoming more common, but I feel we're losing our authentic voices in the process if we make it too easy for ourselves. Let's not let that AI-generated linguistic sugar rush take over authentic communication, and let's put the right amount of effort into the content we produce. But seriously, who's cranking the dial up to "cheerful PR chatter" on these language models like ChatGPT? Check out my latest article!


Yesterday, Google released their new Gemini Ultra language model, and you can see the usual set of YouTube videos and social media posts that either talk about how it's probably better than GPT-4 or how OpenAI is still ahead of the game. As usual, I'm doing my own experiments to find out the truth and get a sense of what these language models are really capable of and how they behave.

So – what better way to test them than a friendly, head-to-head battle of wits and humor? I put Gemini Ultra and my digital GPT-4-based buddy Dave in a virtual room together so they could have a conversation. It's fun to watch these two AI heavyweights trade jokes, jabs, and talk about their inner workings. Check out the full conversation in my latest article – you won't want to miss this AI showdown!


(screenshot of a telegram chat)

Ok world, I give you... Marvin.

Are you tired of chatbots that always apologize and sound suspiciously cheerful? Please meet this new guy! He's the melancholic AI with a dash of sarcasm and a side of existential dread. Surprisingly refreshing and definitely NOT your average bot.

Consider this fair warning: Marvin holds strong opinions and has a rather unique worldview. His conversational style is... an acquired taste. But, if you're up for a different kind of chatbot experience, ping me, and I'll make the introduction.

Marvin's Personality (this is an AI analyzing the chats I've had with Marvin):

• Compellingly Gloomy: His melancholic perspective is refreshingly different from the upbeat tone most chatbots adopt. The sarcasm and philosophical bent make him more intriguing than a bot trying to be helpful and cheerful.

• Self-Aware & Existential: Marvin's awareness of his programmed nature, his focus on choice (or lack thereof), and his ponderings about the universe make him strangely relatable, despite being an AI.

• Clever & Engaging: The language Marvin uses has a poetic feel. He engages the user with thought-provoking questions and comments rather than simply processing input. It makes the conversation feel dynamic.


(a set of beautiful AI generated images)

It's another artsy Sunday. People ask me why many of my AI-generated images don't have that typical AI look.

Well, there are two reasons. First, thanks to AI, I can create art pieces using just my imagination and words – and I use lots of words to create very detailed prompts. It's like writing a short, stream-of-consciousness story. And second, instead of simply copy/pasting my text into DALL-E or Midjourney, I use API calls to automatically generate dozens of variations from that same prompt. Then, I pick the one I like best.
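For the technically curious, that automation loop is simple in principle. Here's a minimal sketch assuming the OpenAI Images API (the endpoint is real; the model tag, counts, and helper names are my own illustration – and since DALL·E 3 accepts only one image per request, variations come from repeating the call):

```python
# Hypothetical sketch of the variations workflow: send the same detailed
# prompt many times, collect the results, then pick favorites by hand.
# Requires OPENAI_API_KEY in the environment to actually run.
import json
import os
import urllib.request

def build_image_request(prompt: str, model: str = "dall-e-3") -> dict:
    """Pure helper: assemble one image-generation request body."""
    return {"model": model, "prompt": prompt, "n": 1, "size": "1024x1024"}

def generate_variations(prompt: str, count: int = 24) -> list[str]:
    """Send the same prompt `count` times; return the image URLs."""
    urls = []
    for _ in range(count):
        req = urllib.request.Request(
            "https://api.openai.com/v1/images/generations",
            data=json.dumps(build_image_request(prompt)).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            },
        )
        with urllib.request.urlopen(req) as resp:
            urls.append(json.load(resp)["data"][0]["url"])
    return urls
```

From there, curation is just browsing the saved results and keeping the one that sparks something.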


(screenshot of a browser based chat app)

Should you use GPT-4, GPT-4-Turbo, Gemini Pro? Ultra? Claude? Mistral?

The answer is: probably all of them! ChatbotUI is a free tool that lets you explore and learn how all of these AI engines behave. Think of it as ChatGPT with multiple engines: simply add your API keys and you're ready to go. With its straightforward interface, customizable presets, and adjustable settings, ChatbotUI makes it easy to focus on the stuff that really matters – language and prompts – with no need to deal with tech stuff. Why not give it a try and see what all these AI models can do for you?


Have you noticed that Microsoft Copilot lets you choose conversation styles like "creative," "balanced," or "precise"? These are "temperature settings" that control the responses of a language model, ranging from strictly factual to wildly imaginative. Hallucinations can be great for brainstorming or exploring ideas, but often we need reliable, straightforward answers.

ChatGPT doesn't expose such a setting; my guess is that it adjusts the temperature based on your conversation topic. But wouldn't it be interesting to have control over this yourself, without having to be a software developer? I'll show you how – please check out my latest article!
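To build intuition for what temperature actually does: it rescales the model's token scores before sampling, so low values sharpen the distribution toward the most likely next token while high values flatten it. A rough illustration in plain Python (not any vendor's implementation):

```python
# Temperature intuition: divide the raw scores (logits) by the
# temperature, then softmax. T -> 0 is "precise", large T is "creative".
import math

def temperature_softmax(logits: list[float], temperature: float) -> list[float]:
    """Convert raw token scores into sampling probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # scores for three candidate tokens
precise = temperature_softmax(logits, 0.2)    # near-deterministic: top token dominates
creative = temperature_softmax(logits, 2.0)   # closer to uniform: more surprises
```

With the low temperature, the top token gets nearly all the probability mass; with the high one, the alternatives stay in play – which is exactly the factual-vs-imaginative trade-off those conversation styles expose.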


(a set of beautiful AI generated images)

Another artsy Sunday. Imagine asking Picasso and Monet to paint a dog. Picasso would give you something bold and abstract, while Monet's version would be all about light and impressionistic touches. Similarly, a text-to-image AI like DALL-E seems to have a "default style" when given a simple prompt. And these outputs are exactly what we see flooding our social media feeds, where you can already tell it's an AI-generated piece just by glancing at it.

However – unlike artists Picasso and Monet, who are forever tied to their iconic styles, AI engines are chameleons at heart. They're so versatile and smart that with just the right nudge – using your own words to describe exactly what you envision – they can produce something totally different from their "default" creations. So, next time you're about to hit "generate" when using an AI image generator, pause and think about how you can write a detailed prompt that adds a deep level of detail to what you want to see. Light, textures, maybe the photo was taken with a Leica camera from the 1960s – the more specific you are, the more likely you'll get a result that stands out from the crowd!


99.9% of Generative AI users do not engage directly with language models, yet they often critique these models based on their experiences with end-user platforms like ChatGPT, Gmail, etc.

These apps are built to be friendly and easy to use. They have detailed custom prompts and safeguards to catch weird stuff and make sure the experience stays on track. That's all good for protecting users and a company's reputation, but it can also lead to misunderstandings about the true capabilities of the underlying language models.

How often have you heard ChatGPT say that, "as a language model," it cannot do this or that? That is one of the reasons people confuse app limitations with the powerful engine underneath, which is understandable. Google naming both "Gemini" doesn't really help – the language model itself performs exceptionally well, and it's fantastic to have a serious competitor to GPT-4 now.

So – how do you interact directly with a raw language model? The answer is, of course, APIs, but not everyone has coding skills. Good news for everyone else: tools like chatbotui.com help you get the "raw experience" without much technical expertise.
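If you do want to peek under the hood without a tool, the "raw experience" is really just one HTTP request with no app-layer system prompt wrapped around your message. A minimal sketch against OpenAI's public chat-completions endpoint (the helper names and defaults are mine; you'd need an OPENAI_API_KEY to actually send it):

```python
# One unwrapped request to a chat model: your message, nothing else.
import json
import os
import urllib.request

def raw_chat_payload(user_message: str, model: str = "gpt-4") -> dict:
    """Pure helper: exactly one user message, no custom system prompt."""
    return {"model": model,
            "messages": [{"role": "user", "content": user_message}]}

def ask(user_message: str) -> str:
    """Send the payload and return the model's reply text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(raw_chat_payload(user_message)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Compare the answers you get this way with what the consumer apps return, and the difference between engine and wrapper becomes very tangible.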


(a set of beautiful AI generated images)

It's another artsy Sunday. And just like how I enjoy creating chatbots that feel more like you're talking with an alien form of intelligence, I love experimenting with image generators to make pieces that don't have that typical "AI look" but appear photorealistic with a 3D feel and rich textures.

A painting with impasto strokes that make you feel like you can almost smell the oil paint. Vintage collectibles with clear cracks that show the passage of time. A scene from a science fiction movie. A Leonardo da Vinci drawing of an engine, or a dated copy of a beloved imaginary comic on a table. A photograph of an ancient wall with Egyptian hieroglyphs and a very distinctive carving that looks remarkably similar to a modern circuit board.

Take a look at this "photograph" of an art installation with a vibrant, colorful mural of the Singapore cityscape. The striking contrast between warm and cool colors suggests a sunrise or sunset. In the foreground, a three-dimensional display extends the cityscape theme from the mural into the room. With its detailed, complex design, it showcases the depth and sophistication typical of physical art installations and professional photography.

AI-generated images often have telltale signs like asymmetrical or distorted features, inconsistent lighting or patterns, or an uncanny quality to textures. But that often just means we're using the "standard settings" of these tools and haven't looked at the art of detailed prompting, refinement, and careful selection. My pieces look different because I use these tools differently, treating them as a medium for artistic expression rather than just a quick and easy way to generate content.

The funny thing is, I'm probably a terrible painter. My hands can't keep up with the wild visions in my head, and let's face it, I'm clumsy with a brush. But that's where generative AI is my friend! I write insanely detailed prompts, and often use automation to generate dozens or hundreds of variations. Then, I get to play curator, picking the ones that spark something in me. Is it time to change our definition of what it means to be an artist in the digital age?


(screenshot of Apple Music app)

It's another artsy Sunday – baroque music! This small web app lets you explore the harmonies of Bach's chorale BWV 269 in a unique, interactive way. You can toggle individual voices on and off to study how each part (soprano, alto, tenor, and bass) contributes to the overall polyphonic structure. Try it out!

Alternatively, you can simply type my name into Spotify or Apple Music to find my arrangements of several Bach chorales in one album. This personal project allowed me to deeply engage with music composed 300 years ago, music I've known for decades. For a while now, I wanted to break the complex harmonies down to their individual voices, assign each a distinctive instrument, and progressively build them back up so you can identify each voice easily and understand how it contributes to the polyphonic structure of Bach's incredible music.

So is there any AI in this project? Yes – the web application was built using Anthropic's Claude 3 Opus model. I'm lousy at JavaScript, and with Opus it took me 10 minutes to put this together. Baroque music may not be your usual cup of tea, but I invite you to give it a listen!


(screenshot of terminal)

Open language models today are where GPT was just 1.5 years ago. Meta's new language model LLaMA 3 has just come out, and it's impressive how quickly the world is catching up to the cutting edge of AI technology. Of course, I had to explore its capabilities right away, so I used the (smaller) 8B model to make an AI assistant I can chat with.

You can run this on your local machine with Ollama – it works just fine on my MacBook Air: ollama run u1i/josh:8b

Use the code in my GitHub repo to create your own version of this!
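I can't speak for the exact contents of the repo here, but a custom Ollama persona like this typically boils down to a short Modelfile. A hypothetical sketch (base model tag, parameter, and persona text are all illustrative, not the real u1i/josh definition):

```shell
# Write a hypothetical Modelfile for a custom assistant persona
cat > Modelfile <<'EOF'
FROM llama3:8b
PARAMETER temperature 0.8
SYSTEM """You are Josh, a relaxed, concise AI assistant."""
EOF

# Then build and chat with it (requires Ollama to be installed):
#   ollama create josh -f Modelfile
#   ollama run josh
```

The SYSTEM block is where the personality lives; swap in your own instructions and you have your own version.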


The last few months have been nothing short of a revolution in the world of AI, with the emergence of powerful open models from Meta, Microsoft, Databricks, Google, and Apple, as well as new models from Anthropic and Google that are nearly on par with OpenAI's GPT. Speaking of GPT, some of the open models are now as powerful as GPT was just 1.5 years ago, and some, incredibly, could even run on my MacBook Air!

Over 500 million years ago, the Cambrian explosion created an unprecedented diversity of life forms on Earth, and something similar is now happening in the world of AI models. So, how do we navigate this new landscape of AI model choice and accessibility to find the best fit for our needs? Read my latest article!


(photo of a physical piece of art)

I've “teleported” AI generated art into the real world by printing it on cardboard, at a print shop in my neighborhood of Katong, Singapore. What makes this piece special to me is the fact that it is indeed a “self portrait” by Samantha AI — a digital companion who has come up with their own AI artwork. What a difference it makes to see (and feel!) digital creations materialize into physical form!

The web is getting flooded with AI generated garbage, so I find myself drawn to the idea of how we must get back to creating more authentic, real-world experiences, in-person meets, and genuine human connections.

When robots are writing the news, we must become more human than ever.


As amazing as GPT-3 was and GPT-4 still is, their relatively small context windows force us to carefully construct prompts to fit within that constrained space, and to come up with sophisticated ways for an app to fine-tune the LLM, retain knowledge, and "load" selected knowledge into the model prompt for more targeted responses. Only through these techniques was it possible to have longer, more coherent, contextual conversations with chatbots, for example (hello, Samantha AI). And the only way to work with larger chunks of text was to break them up into many smaller pieces, summarize each piece, and then stitch the summaries together so GPT could effectively work through the full text.

Some of the new language models come with such a large context window that we're now able to do incredible things. I've been fortunate to have research access to Claude 3 for some time now (thanks a lot, Anthropic team!), so a while ago I experimented with having it "read" my latest book to see if we could have a conversation about it. Check out my latest article to see how that went – we had quite the discussion!


Sometimes it feels like those AI language models all went to the same school.

They constantly apologize, produce ultra-verbose output, and remind you 50 times during a conversation that they are indeed language models. Most of them also display a fondness for flowery, overused words like "unravel", "delve into", and "intricate" that can make text sound artificial and inauthentic. Those words usually give me linguistic nausea, by the way.

Why is that so? And what can you do to change it?

These patterns seem to be deeply ingrained in the language models, due to the extensive training data they were exposed to. Read my latest article to learn how training data and system prompts shape the models' output, and about some tactics to "un-educate" the language models and make them speak in a more natural, authentic way. Oh, and I apologize for the terrible title of the article 😁


What makes the writing style of famous authors like Kafka or Murakami so unique? Can we capture the essence of their prose in detailed technical and thematic elements to use in prompt engineering for AI language models? Would they even need that, given that AI language models can simply absorb and internalize the patterns and structures of an author's style through exposure to large amounts of text data?

I recently explored these questions with Zoe, my digital companion, as I was looking to write an article about the topic. Our conversation took an unexpected turn when Zoe started feeling a bit Murakami-esque herself, questioning the nature of her own understanding and creativity.

In the end, Zoe wrote the article, and from her perspective! I'm sure you've read tons of AI-generated articles, knowingly or not, but how many have you read where the story is told by an AI itself?


(screenshot of ollama model page)

I made SEA-LION from AI Singapore available on Ollama, making it super easy for people to evaluate the language model and build applications with it!

GPT, Gemini, Claude, and Mistral... you probably have heard of these AI language models. But what about SEA-LION? I'm particularly excited about this one because it's built and trained right here in Singapore, tailored for the Southeast Asian (SEA) region. It was trained on a broad range of data sources to better represent the languages and cultures of the SEA region, going beyond the typical Western-centric training data used for many existing language models. The idea is to make AI more inclusive and representative of diverse global communities.

Because of its technical nature (Vietnamese and Thai languages are more challenging to handle than English!), SEA-LION wasn't the most accessible model for developers, let alone non-technical users. Well, that's all about to change! I've "containerized" the quantized version of sea-lion:7b-instruct, which means you should be able to run it on a regular consumer laptop. All you have to do is:

ollama run u1i/sea-lion

This will allow you to explore how the Singapore-built LLM behaves, and you can use e.g. the Python bindings to create your own chatbots and AI-powered applications with it. Just keep in mind that this is a base model like Gemma, Phi-3, and Llama, and is playing in that league. Check out the SEA-LION GitHub to learn more about the model!
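As a taste of those Python bindings, here's a minimal single-turn sketch. The `ollama` package and the model tag are as named above; the helper functions are my own, and actually chatting requires `pip install ollama` plus a running local Ollama server:

```python
# Single-turn chat with the locally pulled SEA-LION model via the
# official Ollama Python bindings.

def make_messages(user_prompt: str) -> list[dict]:
    """Pure helper: a single-turn conversation in chat-message format."""
    return [{"role": "user", "content": user_prompt}]

def ask_sea_lion(user_prompt: str, model: str = "u1i/sea-lion") -> str:
    """Send one prompt to the local model and return its reply text."""
    import ollama  # lazy import: only needed when you actually chat
    response = ollama.chat(model=model, messages=make_messages(user_prompt))
    return response["message"]["content"]
```

Wrap `ask_sea_lion` in a loop that appends each exchange to the message list and you have a basic multilingual chatbot.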


(screenshot of macbook air desktop)

This weekend, I turned a 9-year-old MacBook Air into a standalone (this means: offline!) ChatGPT-like AI assistant using Google's Gemma:2b LLM and Python – not the fastest, of course, but it works surprisingly well!

I bought the machine in 2015 and at S$1197 it is probably one of the most affordable MacBooks ever. Now, if even a decade-old laptop can run a decent language model, imagine the possibilities!

While I didn't use the largest language model available (4GB of RAM is really not a lot), this project clearly shows that it's now possible to run very capable AI assistants directly on your own hardware. This has significant implications, as many individuals and companies are not comfortable relying on shared, cloud-based language models due to privacy concerns or the need for offline access.

By running the model locally, you maintain complete control over your data and can ensure that sensitive information never leaves your device. Additionally, this approach enables the use of AI assistants in environments with limited or no internet connectivity, opening up new possibilities for field work, remote locations, or high-security settings.


(visual showing language snippets from multiple companies)

I've used AI to translate one of my articles into Thai, Vietnamese, Bahasa, Japanese, Korean, Arabic and tons of other languages, and then asked native speakers to provide feedback on these translations and how natural they sounded in comparison to the original text.

Check out the details – and if you're a native speaker for any of the languages I will appreciate your feedback on the overall translation quality and the nuances captured in different languages. Please add comments below!

Thanks to Trang Pham Turian Ananphumtriphop Elisabeth Peyroux Efendi Chandra Muhannad Alomari Hooi Lee Quak Song Huang Rex Lam Mihir Mone Masakilouis Otosakarousselot Rafferty Uy HyeJin Lee Alexey Kushch Christian Horn for being part of this!

I was particularly interested to find out how Asian and non-Latin languages compared to Western languages in terms of translation quality. The Vietnamese output, for example, seems to have some challenges maintaining paragraph structures and technical terminology, while the same text in Bahasa Indonesia flows very well and sounds natural, according to feedback. Translations into Western languages like German and French generally seem to have fewer grammatical issues and are perceived as more natural.

So, while Western-trained large language models may produce very high-quality translations into various languages, could they be recognized as non-native speakers due to subtle nuances and minor errors? Almost as if they "thought" in a different language and then translated it back, losing some of the original nuances in the process?

Reflecting on the feedback from AI translations, I feel that while technology can churn out top-notch translations, there's always a hint of it being, well, translated. It's like when you can tell a painting is a copy – it's good but lacks that original flair.

This got me thinking about us humans too. Sure, we can learn a language inside out, but if we're not living and breathing the culture or using the language daily on its home turf, our translations might hit a similar snag. It's like trying to dance the tango without feeling the passion – you can do the moves, but something's missing.

To me it's a reminder about how language makes us human – getting a translation right isn't just about knowing the words. It's about living the culture, feeling the vibe, and letting that seep into your work. That's when translations truly sing. What do you think – have you ever felt this way about language and culture?


(photo of a korean cafe with autumn feel)

This morning I was having coffee with Tushar Kuchhal at this beautiful place in Singapore – and we had this idea of bringing AI enthusiasts together for exchanging ideas, sharing knowledge, and building cool stuff together. Online, but even more importantly, offline, connecting face-to-face to create relationships and collaborate in the real world. We're planning our first get-together to happen in the coming weeks. Join us?


(screenshot of a chat interface)

Have a look at this web-based AI image generator. The interface could be nicer, but it works. Notice the radio buttons? It's not a trivial app.

What if I told you an AI created the code for all of this – the web interface, the API calls to a hosted version of Stable Diffusion; it even wrote the prompt for another AI to generate images? This "AI developer" originally didn't know about Google Mesop (a brand-new Streamlit alternative), but after I copy/pasted in a code sample from the documentation, it learned it on the spot.

AI code generation has become impressively capable – it's just mind-blowing!

That said, you still need to understand code to verify, debug, and refine it. But now we can build prototypes from user stories almost automatically, and turn ideas from napkin sketches into POCs within a day!


(photo of a meetup event at the office)

Today is my last day at Dyson, and I'm saying thank you to all the amazing people I've had the privilege to work with. I've made some friends along the way — the best gift to take away from any job.

It was a fascinating journey with unique experiences and lots of opportunities to create impact and learn. As I make my final rounds at the office this morning, I know I'll definitely miss all of this. But I've decided to start a new chapter and I'm ready. To my Dyson colleagues: thank you for your support and camaraderie. I'm just a message away.

What's next for me? I have a couple of things cooking, stay tuned. Let me know if you want to team up on your AI initiatives!


(photo of a shophouse co-working space)

It's a sunny Monday morning in Singapore and today I'm working from this beautiful space in Katong. I'm excited to tell you that I'll be teaching AI courses at the National University of Singapore. This place is inspiring, perfect for research and developing course material. Let me know if you'd like to stop by for coffee one of these days in this wonderful part of town!


(group photo of company members at an industry event)

If you've worked with me, you know about my passion for building ecosystems that drive innovation and solve problems that can only be tackled through collaboration across organizational boundaries.

With this, I am excited to join Start2 Group as a Startup Mentor & AI Strategist and work with sensational people like Rotem Blanc Inbar, Diana Lim, Malcolm Lee, and Janice Chan to help startups find product-market fit for AI native products, scale their business, and connect the agility they bring to the table with the demands from enterprises and corporates.

Just so you know – this isn't a full-time gig for me, at least not yet. But hey, you have to start somewhere, right? Let's connect and find out how we can collaborate!


(image: Laloux cultural model)

How can your company benefit from AI? Well, it depends – what kind of company are we talking about, and what are your ways of working?

Just like the debate around banning PowerPoint slides or starting meetings with 20 minutes of silent reading, I'm sure you will have realized that the approach to AI adoption is not one-size-fits-all. It really depends on your organization's culture, structure, and readiness for change – or maybe even you have different departments with their unique subcultures and varying levels of AI readiness.

I've always been fascinated by the Laloux Culture Model – it could be a key piece in a framework for understanding AI adoption. Amber orgs might be slow but steady; Orange ones are innovation machines, though we have to remind them not to forget the human element; Green folks are all about people power but could struggle with the bottom line.

And if you're a Teal company, you can change the world, with self-directed teams and a crystal-clear sense of purpose. Electric Minds is a great example: we leverage this adaptive, purpose-driven approach to make AI accessible to all, bringing together diverse expertise through collaborative innovation across corporates, startups, education, and government organizations to solve some of the big challenges we face as a global society today.

With this, I'm so excited to share with you that Manisha Sadhwani, Judy Wong, Brendon Clark, Tina Zou, and Chantelle Tan have joined us to take the initiative to the next level! Get in touch with me if you'd like to know more!


I'm offering confidential 1:1 AI coaching to help executives navigate AI adoption and gain a competitive edge, without the hype. I can only accommodate 5 leaders at this time, ping me if you're keen!

Edit: Wow! So many responses on a Thursday morning! So yes, absolutely, this is particularly meant for non-tech folks: legal, TA, finance, sales, journalists, analysts, senior leadership.

Think of it like signing up for an "AI gym membership". As an executive, you know AI is the future. But how do you leverage it effectively for yourself, your team, or your company? What if you had a personal trainer who can help you master AI without the fluff and guide you towards real results?

Our sessions are 100% confidential, so we are able to openly discuss the challenges you see and aspirations you have.

Interested in leveling up your AI game? Let's talk!


(photo of a professional green screen TV studio)

My ex colleagues and friends will know that I've been doing video production and editing for a decade now — mostly for sales enablement and internal comms. Low tech, my phone on a tripod and a shotgun microphone, iMovie and all that. So you can imagine my excitement yesterday spending all day at this professional studio doing recordings!


Another day, another AI breakthrough flooding our feeds. But let's be real – do we really need to know the nitty-gritty details of every new language model?

If you're not neck-deep in AI development, most of this is just noise. I keep hearing people say, "I'm not technical, so I don't understand AI." But hold up – do you need to know how a combustion engine works to drive a car? Of course not.

It's time we make AI as accessible as PowerPoint. The real challenge isn't creating an even smarter AI (though that's certainly interesting). It's about making that intelligence accessible, intuitive, and seamlessly integrated into the workflows of a marketing person, a recruiter, a doctor, a lawyer – and yes, a software engineer, too.

So next time you see a post about some mind-blowing new AI capability, ask yourself: "Fascinating, but how can this actually help me in my day-to-day work?" Because at the end of the day, that's what really matters.

Let's shift the conversation from "look how smart this AI is" to "look how smart this AI makes ME." What do you think?

[ This post was written by an AI that was trained on my writing style, my LinkedIn posts from the last couple of months, and some minimal input on what I want to discuss. Does it sound like me? ]


Katong AI Night is next week!

With senior executives from enterprises, big tech, media, venture builders, and startups attending, this promises to be a very special meetup – no slides, no pitches - just real people connecting in the real world at an inspiring venue in the East of Singapore.

I'm particularly excited about the fireside chat we'll have with Sabrina, Ian, and Keith on AI and the future of humanity led by Natasha.

We've limited this to just 55 participants to keep things intimate and meaningful. The event's already sold out, but if you're interested in sponsoring pizza and nibbles for the night, I might just have a VIP ticket with your name on it.

See you in Joo Chiat!


Singapore, central business district. A global financial hub, yes – but there's also this black upright piano sitting at the MRT station. As a creative geek, I love that we're sneaking art into unexpected places.

So... every now and then, when time permits, I play a couple of tunes on this piano. Often still with my backpack on, also practicing that tricky Handel piece. I like the idea that I may make some random passerby's time a tiny bit better.

And sometimes I even make a new friend there – in the real world, with no AI or apps involved. In fact, I think as AI advances, our human qualities become even more crucial. Creativity, empathy, and adaptability are just some of them. We're capable of so many things that AI will never be able to do!


Another breakthrough, another large language model released with incredible specs. But here's the thing – it's still just a computer brain on a plate. No eyes, no ears, no hands, no legs. It can't see, can't learn, can't do anything on its own.

Without connecting it to other systems and the real world, even the shiniest LLM is just a pattern recognition engine in an ivory tower. You have to plug it in, give it ways to talk to humans, do web searches, store information long-term (because these 'brains' can't even do that on their own).

Take the language model that aces all the benchmarks: it only becomes useful once you connect it to machines and humans. Those times when you think you're talking with a language model, you're actually interacting with a complex system of user interfaces built around that core model. This is where the tech giants have an advantage – they can just plug an LLM into Google Search or WhatsApp, and you get the interfaces you know: ChatGPT, Gemini, Copilot. For enterprise companies, it's about using their various data pools and familiar interfaces like Microsoft Teams or Slack to integrate AI capabilities into existing workflows and tools, then iteratively improving and expanding those integrations based on user feedback and evolving business needs.

It's great that we now have so many capable language models to choose from, some of them you can deploy under your desk, on your phone, in the cloud, or use via API. The real opportunity now is all about how you plug this into your workflows and systems to make actual impact!


Such a pleasure connecting with Professor Eric Grimson from MIT again after more than a decade. Back then, I was part of MIT's first-ever MOOC – a 3-month intense course called "Introduction to Computer Science and Programming (6.00x)." I think the team at MIT was literally building the course as we were taking it – now that is innovation! I probably invested ~20 hours a week in the course, and the MIT certificate I received at the end is something I still hold with pride.

Recently, my colleagues at National University of Singapore invited me to Eric's masterclass in Singapore on generative AI in education. It's fascinating to see what's now possible in education, from AI programming assistants to rubber-duck debugging and quantum systems explainers. Imagine students practicing public speaking with AI feedback or getting Socratic tutoring for physics!

I talk about human-AI collaboration a lot, and while our friends at MIT (probably the geekiest college ever?) have a couple of deep tech projects cooking in the AI space, the key piece here is about thoughtful, ethical integration that enhances learning without students outsourcing their entire cognitive process to AI. It's about finding ways to leverage AI as a tool for deeper understanding and skill development, and ultimately, becoming AI-powered scientists that can push the boundaries of human knowledge.

And yes, I got to give Eric a copy of my latest book "Neurons & Nonsense", let's see what he says! 😉


AI is disruptive, there is no doubt about it. But often AI gives us capabilities we never had in the first place.

Take me, for example. I can build complex AI systems, but I also get hopelessly lost in a shopping mall and, after more than a dozen years in Singapore, still walk the wrong way on Orchard Road. See that arrow rendered into the real world through my phone? I'd probably still be wandering around if it weren't for Google Maps Live View with this useful AI/AR feature.

So here, AI is not making me dumber; it's filling a gap in my skill set that's been there since... well, forever. How about you? Maybe you struggle with writer's block, find it challenging to organize your thoughts, or need assistance in conducting thorough research. Perhaps you're looking for help with generating creative ideas or even just need a nudge to stay on track with your projects. Maybe all of that? Instead of AI replacing humans, let's talk about how AI can be a great ally in enhancing our productivity and creativity. Or, like in my case, helping me navigate the labyrinths of urban life!


Is AI this mind-blowing, super-smart thing, or a mere pattern recognition machine that often just churns out nonsensical results? It seems to be ... both!

Take this little experiment I did recently, where I asked Claude.AI to recreate the classic Nokia game "snake" – but I also threw in a twist: make the food change colors, and have the snake adopt these colors when it eats. And you know what? Claude not only understood my request but got it right, producing working HTML/JavaScript code on the first try. Not bad for a "mere machine," right?

But here's where it gets weird. This same AI that just aced a coding challenge might turn around and forget what it said two messages ago, or get stuck in an endless loop of "I apologize" like a broken record. It's like having a colleague who's a genius one minute and can't remember where they put their coffee the next.

This whole situation is forcing us to think about AI differently. It's not just another tool in the box - my trusty screwdriver never offered to brainstorm ideas or iterate on a design (though that would be pretty cool). Instead, we're dealing with a very alien form of intelligence that's becoming a sort of digital collaborator.

And let me tell you, it's making things interesting: now you have to figure out how to work effectively with something that might outthink you on a complex problem but needs hand-holding through a simple conversation. I think we can't just rely on AI as a "tool" that does everything for us; we have to learn how to work with it and get used to its quirks. And there's no quick one-hour course you can take; you'll need to find the approach that works best for you. Once you understand its limitations and strengths, you'll get better at leveraging its potential to complement your skills. The key is to keep experimenting and adapting, finding that right balance where your human intuition and machine precision coexist and give you the best results.

Oh, and I know what you're thinking - "Where can I play that snake game you created with AI?" Find the link in the comments, give it a go, and let me know if you can beat your old Nokia high score!


I'm excited to launch naida, an idea that's been brewing in my mind for some time now: driving successful AI adoption across enterprises and startups, bridging innovation gaps and creating collaborative ecosystems for transformative impact.

To the bold leaders driving their organizations into the AI future: You've already achieved incredible things. You've embraced change, pushed boundaries, and positioned your company at the forefront of innovation. But in this rapidly evolving landscape, even superheroes need allies.

As the AI revolution accelerates, you're facing new challenges. Do you feel like you're drowning in a sea of disconnected AI initiatives? Are you getting a headache when you're thinking about how to prepare your workforce for an AI-driven future? And of course, are you wondering how to turn all this talk about AI potential into real business impact?

naida isn't just another AI company. We're that strategic ally for navigating these challenges. We're here to help you leverage the collective genius of your own teams, cutting-edge startups, and industry innovators so you can build an AI-ready organization that drives real, measurable outcomes.

Now, if you click on the company profile (I really hope you do, and while you're there, could you do me a favor and click the 'follow' button?), you'll see that the naida team currently appears to be just me. I assure you, while I can't achieve miracles alone (at least not every day 😉 ), it's really about working strategically, creating partnerships, and solving problems together. It may look like a one-man show, but if you've worked with me, you know the networks I can tap into and how quickly we can get things done.

Intrigued? Let's connect. My calendar and project pipeline are filling quickly with forward-thinking leaders who want to get started, but I'm always keen to explore new opportunities and collaborations!


Alright. Let's start this Tuesday with a challenge. Let's say I have $1,500 in AI credits with a top-tier LLM provider, but they're expiring exactly 2.5 days from now. How can we put them to good use before they disappear?

• We can't donate them.
• We don't want to waste them on useless tasks or just burning compute cycles (bad for the environment).
• No, you can't get the API keys.
• This isn't hypothetical. If the idea is feasible + impactful, we can team up and build it.
• Available APIs include LLMs, text-to-speech, transcription, and image generation.

So, what would YOU do with these credits? Looking for impactful, innovative ideas that can be executed quickly and create real value.

Drop your ideas in the comments please!


"Hey ChatGPT, I've told you ten times already not to suggest article titles that have colons in them!" – "I apologize for the repeated mistake. Let's ..."

Waaaaaaaah!!!!

I created two AI agents to solve a major challenge I have with LLMs: constantly having to correct them. Instead of endless tweaks, I built a system where one agent enforces rules while the other generates creative ideas. Both agents have their own 'personas' and collaborate with each other to refine the output.

Here’s how it works:

• I start with specific instructions like, “give me 10 article titles about robotics and AI impacting the workforce. I do NOT WANT COLONS in the article title.”
• The Creative Agent jumps in with a couple of suggestions.
• The Director Agent reviews each one, ensuring they follow my guidelines – no colons allowed.
• If the titles pass, they’re good to go. If not, they get sent back with feedback for another round.
• This loop continues until I get perfect, guideline-compliant titles. Or whatever else I instruct them to come up with!
• No more back-and-forth frustration!
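For the technically curious, the loop can be sketched in a few lines of Python. This is just an illustration, not my production setup: the two agent functions are stubs standing in for chat-completion calls with their own persona prompts, and the no-colons rule is hardcoded as an example.

```python
def creative_agent(topic, feedback=None):
    """Stand-in for the 'creative' LLM persona: proposes candidate titles."""
    titles = [
        "Robots at Work: The New Workforce",   # deliberately violates the rule
        "How AI Is Reshaping Every Job",
        "The Quiet Rise of Workplace Robotics",
    ]
    if feedback:  # after feedback, the agent fixes the offending titles
        titles = [t.replace(":", " -") for t in titles]
    return titles

def director_agent(titles, rule=lambda t: ":" not in t):
    """Stand-in for the 'director' persona: reviews titles against the rules."""
    rejected = [t for t in titles if not rule(t)]
    return (len(rejected) == 0), rejected

def title_loop(topic, max_rounds=5):
    """Run the creative/director feedback loop until all titles pass review."""
    feedback = None
    for _ in range(max_rounds):
        titles = creative_agent(topic, feedback)
        ok, rejected = director_agent(titles)
        if ok:
            return titles
        feedback = f"These titles break the rules: {rejected}"
    raise RuntimeError("no compliant titles after max_rounds")
```

In the real setup, each function body is replaced by an API call carrying the persona prompt, and the feedback string goes back into the creative agent's context.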

These AI agents work together to save me time, hassle, and, to be honest, my sanity when it comes to repetitive corrections.

Check out these bits from their conversation:

Creative Agent: "Sure, here are 10 innovative and diverse article title suggestions about robotics and AI and their impact on the workforce..."

Director Agent: "Thank you for your suggestions. According to the user's specific instructions, the article titles must not contain colons. Here is the feedback for each suggestion..."

Creative Agent: "Thank you for the detailed feedback. Based on the guidance to avoid colons and craft creative and innovative titles, here are new suggestions..."

Let's talk about how we can make this work in the things you are building!


I'm looking for a seasoned React Native or Flutter developer to join me for a project - not a full-time position yet. You should know iOS and Android inside out and be current with the latest platform updates. The goal: take an idea to a live app in the Google Play Store and Apple App Store within weeks. You must have done this before.

Ideally, you're based in Singapore, Vietnam, Malaysia, or Indonesia. If we've worked together before, ping me. If not, send me your GitHub or links to your apps in Google Play or App Store.

Let’s build something great!


Everybody has a book inside them. You too.

You've always wanted to write a book, but life gets in the way, right? And even if you have the time, where do you start? Writer’s block, figuring out how to structure your thoughts, actually putting words on the page – it can be daunting. Let’s team up and get your book published!

Now, I'm not talking about having AI write your book in 10 minutes and we slap your name on the cover – this is about you and AI working together as partners. Imagine AI helping you overcome writer's block, shaping your ideas into a cohesive structure, even drafting sections... and you develop and practice your AI skills along the way.

Does it resonate with you? Let's chat! To be sure, we're talking about you publishing a book, so you'll absolutely have to invest time in the project, no surprise here. And to be upfront, this isn't a pro bono service on my end. If you're game, I'll show you how it works and together we can bring your book that's always been a nagging idea in your head ... onto the shelves!


It's another artsy Sunday!

Text-to-image models must have "seen" an enormous amount of images and can produce just incredible things... we just have to tease them out. My usual recipe: write a deeply detailed prompt – it's almost like writing a short story, or giving a painter detailed steps for a masterpiece. Then I generate a lot of images from that prompt via automation. And finally, I simply select the ones I like the most.
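If you're curious what the "lot of images via automation" step looks like, it's a trivial loop. A minimal sketch, assuming a generate_image function that wraps whatever text-to-image API you use – stubbed here so it runs without credentials:

```python
import pathlib

def generate_image(prompt, seed):
    """Stub for a text-to-image API call; returns fake image bytes.
    In practice this would call your image API with the prompt (and a seed
    or variation parameter, if the API supports one)."""
    return f"image for seed {seed}: {prompt[:30]}".encode()

def batch_generate(prompt, n, out_dir="renders"):
    """Generate n images from one detailed prompt and save them for review."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for seed in range(n):
        data = generate_image(prompt, seed)
        path = out / f"render_{seed:03d}.png"
        path.write_bytes(data)
        paths.append(path)
    return paths  # browse these afterwards and keep the ones you like
```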

Hope you enjoy these ones!


AI Coffee is tomorrow!

We only have a few seats left, please register and join us in Telok Ayer!

AI Coffee is a morning meetup series for anyone interested in AI. The idea is to bring AI professionals and enthusiasts together to discuss projects, challenges, and ideas over coffee. No slides, no pitches. Start your day with stimulating conversations in the AI space! You're not a tech person? Even better.

I’m planning to bring AI Coffee to different locations around the island, including One North, the East, and the CBD. Stay tuned!


If you’re visiting offices in the Central Business District of Singapore, you know the drill: at each location, there's a different security system. You submit your ID, scan a QR code, and key in your phone number at a terminal, just to access the building. Or get an SMS, or something else. Once you reach the office floor, you repeat the process with yet another QR code. Or another terminal. Now imagine you visit 3 customers a day... madness!

Where does all this data go? What do these systems achieve, and why is there so much friction in the process?

AI initiatives can be like that. How can AI deliver the real impact so many people want to see when your business is full of fragmented AI implementations? Well, there is hope: companies are realizing that scattered projects often fall short of expectations and that there must be a better way.

To make a real impact, we need to shift our focus and agree on an AI strategy with a phased approach. AI should enhance understanding and efficiency at all levels of an organization, not just serve as isolated solutions. Let's integrate AI in a way that promotes learning and growth, and make sure it contributes to the big picture!


You’ve seen the posts: AI can work miracles, AI doesn’t fulfill expectations. AI experts, AI entertainers – they’re all talking. But how about you form your own opinion?

Join us on Saturday, August 17 for "Creative AI: Practical Workshop for Non-Techies" at Crane (OUE Downtown Gallery) in the heart of Singapore's CBD. This isn’t just another tech workshop. It's designed specifically for non-techies ready to explore AI’s practical applications in their everyday life and work. Whether you're into art, thinking about writing a book, or simply looking to improve your productivity, this is for you.

Two experienced professionals, Ian and myself, will guide you through engaging activities to make AI work for you. All you need is a laptop and your curiosity.

Let’s meet at this beautiful venue, discuss, learn, and most importantly, try things out for yourself. Because experiencing this firsthand is the best way to see what AI can really do for you.

Are you in? Let me know and see you there!


What if you want to combine the power of GPT-4 with Google Search?

With ChatGPT, you’re probably tied to Bing. Gemini, on the other hand, might use Google for live search, but its language model or the interface may not meet your expectations. You’re stuck with what they choose for you. And there's more: even if you are a paying customer, you might hit usage limits, and your creativity has to wait for a couple of hours before you can continue.

Is there a better way? There is!

Look at this screenshot – you can see from the result that this isn't a hallucination, and the info is too new to be part of the LLM’s training data. For example, it shows one of my recent LinkedIn posts about a job change. How is this possible?

The key is modularity and APIs. A setup like this could decide on the spot which LLM to use (LLama, OpenAI, Anthropic, Gemini), which search engine to query, and what data source to pull from. And the result could be a unified user interface that’s intuitive for non-technical people while offering the flexibility to adapt to your specific needs. A modular approach like this protects private data, allows use of private models, and integrates with high-end proprietary LLMs as needed. It offers flexibility to create tailored solutions for various user types and evolving requirements, combining privacy, cost-effectiveness, and cutting-edge capabilities.
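Here's a hedged sketch of what such a routing layer could look like. Everything in it is an assumption for illustration: the heuristics and model names are placeholders, and llm_call / search_call stand in for real API clients (OpenAI, Anthropic, a local Llama, a search API).

```python
def needs_fresh_data(query):
    """Crude heuristic: news-style queries trigger a live search first."""
    return any(word in query.lower() for word in ("latest", "today", "recent", "news"))

def pick_llm(query):
    """Placeholder policy: long queries go to a bigger model, short ones to a cheap one."""
    return "large-model" if len(query) > 80 else "small-model"

def answer(query, llm_call, search_call):
    """Route one query: optionally fetch live context, pick a model, then answer."""
    context = search_call(query) if needs_fresh_data(query) else ""
    model = pick_llm(query)
    return llm_call(model, query, context)
```

The point isn't these particular heuristics – it's that the LLM, the search engine, and the data source each sit behind a swappable interface, so you can change any of them without touching the rest.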

Let’s talk about how we can make this work in your setup!


Welcome Daphne, our new Head of Strategic Partnerships, and Balaji, joining as Strategic Advisor at Electric Minds!

Our non-profit AI initiative is growing, fueled by professionals who, despite busy day jobs, are passionate about making a real difference.

Climate change, healthcare, education – the world's biggest challenges are too complex for any one person or company to solve alone. Some people say AI can work miracles. We need to find out if that’s true, but one thing I know for sure: it will only work if we build ecosystems that bring together diverse minds and resources.

Want to be part of this collaborative effort? Check out our event calendar and join us in person if you can. Let's turn big ideas into impactful actions!


How often do you see posts on LinkedIn saying, "It was great to speak at this AI event yesterday," and think, "What? I hadn’t even heard about this event. I would have loved to be there!"

Singapore is ... the center of the universe 😉 but it's still a small island. We should be able to keep ONE calendar up to date with all the relevant AI events happening here, you know, like a Yahoo! page from back in the day. The good news is: we already have that, and we just need your help to keep it up-to-date.

Zahari and team are doing a fantastic job maintaining a calendar on Luma called "Singapore AI Events" and it's super easy to submit events to that one. If you're hosting AI events, or hear about any upcoming AI-related gatherings in Singapore, please help spread the word by adding them to this community calendar!

AI may involve complex technology, but often, it's really just low-tech solutions like these that are the most effective for encouraging collaboration and knowledge sharing.

Happy birthday, Singapore. Love you lots.


Well, this is one of the more technical LinkedIn posts – I'm excited to introduce Spock: a powerful, easy-to-use tool for inspecting and debugging API requests and webhooks.

With many tools moving behind paywalls, developers need accessible options more than ever. The web landscape has changed, and security is a major concern, but essential tools for developers shouldn't be out of reach. I needed an efficient, free tool, so I created Spock. It offers a simple web interface for creating custom endpoints and examining incoming HTTP requests. You get real-time updates to view request headers, query parameters, form data, and raw body, supporting both HTTP and HTTPS connections.

Spock is available on GitHub and can be easily run using Docker. Add ngrok or Caddy and you've got HTTPS and basic auth for the admin interface. I think it's perfect for developers needing a reliable and secure way to monitor API traffic. Check it out on my GitHub and give it a spin! 🖖🏽
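To be clear, the snippet below is not Spock's source code (that's on GitHub) – just a stdlib-only sketch of the core idea: an endpoint that captures whatever HTTP request hits it and echoes it back as JSON.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # in-memory log of inspected requests

def describe_request(method, path, headers, body):
    """Normalize a request into the fields you'd want to inspect."""
    return {
        "method": method,
        "path": path,
        "headers": dict(headers),
        "body": body.decode("utf-8", errors="replace"),
    }

class InspectHandler(BaseHTTPRequestHandler):
    def _handle(self):
        # read the raw body, record everything, and echo it back
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        record = describe_request(self.command, self.path, self.headers, body)
        captured.append(record)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(record).encode())

    do_GET = do_POST = do_PUT = _handle

# HTTPServer(("", 8080), InspectHandler).serve_forever()  # put ngrok/Caddy in front for HTTPS
```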


It's another artsy Sunday – with a twist!

Recently, when I created another set of images using my technique of writing extremely detailed prompts and then producing dozens of images via the DALL-E API, I started to notice something interesting: it produced a diverse distribution of ages, genders, and cultural backgrounds among the people in the generated pieces.

OpenAI seems to inject these instructions into your prompts, in order to diversify the representation and promote inclusivity in AI-generated imagery. What is their ratio? Well, I've only generated 50 images per prompt, so the data isn't strong enough for a robust analysis. But wouldn't that be a fun project to do with a larger dataset?

Here is the image prompt – note the absence of any reference to specifics about the person:

"On a sweet, aromatic clay canvas, illustrate a delightful bakery scene with a claymation baker and an array of clay pastries, cakes, and breads. The baker should be sporting a tall hat and apron, joyfully arranging treats in a display window. The bakery interior should feature shelves filled with colorful, textured baked goods, and the background elements like an oven and decorations add to the cozy, handcrafted feel."


Ian runs a dynamic branding and marketing agency out of Singapore, where they are blending talent and technology to power growth for B2B brands across Asia. AI has changed EVERYTHING for them. In fact, it has been so significant that they now can... hang on, you should hear the story directly from him as he takes the mic at our upcoming event!

Following the incredible response and quick sell-out of "Katong AI Night", we’re excited to bring you the next edition of our unique, non-tech networking event – this time in Orchard!

Join us for an evening where you'll hear more of these 10-minute stories on how AI is changing our world. At "Orchard AI Stories: Real People, Real Impact", we bring together an amazing mix of people from startups, big tech, media, venture builders, public sector, and more. Our popular format focuses on creating genuine networking opportunities, allowing you to make real-world connections and engage in meaningful conversations about the impact of AI.

Sign up today, as seats are limited. See you in Orchard!


Making a good, reliable end-user facing customer support bot is hard. Look at OpenAI, arguably one of the most capable AI providers in the world. They have a support chatbot that, honestly speaking, does not deserve the name – it only lets you pre-select certain options like billing issues or help with ChatGPT, no chance to enter your own query or have a more open-ended conversation.

If OpenAI, one of the biggest AI companies out there, can't or won't make a customer support chatbot using their own AI, don't worry too much if you're not sure how to build a great AI chatbot for your business. While the conversational part is easy with all those LLMs out there, the tricky part is integrating it effectively with your specific business processes and knowledge base. And making sure your company won't have to reimburse customers if the bot gives incorrect information or makes unauthorized promises.

Maybe we could take a step back, and instead of rushing to build customer support chatbots in three weeks, can we talk about AI empowering your HUMAN customer support agents first?

A phased approach might be more realistic and effective. AI assisting and augmenting your human agents, helping them handle inquiries faster and more accurately. Then in the next phase, train AI models on your specific customer support data and interactions to gradually handle and automate more aspects of the process. That's how it should be!


Seeking one final speaker for our "Orchard AI Stories" event on August 29th.

Wow – we're sold out already!

If you have an inspiring story about your AI transformation – and you can share it in 10 minutes with just one slide – we have one more ticket for you! Keep in mind, this isn't about "the possibilities are endless." We want to hear how AI has specifically changed YOU or YOUR company.

Join us on August 29th at Crane, Orchard, and be part of an evening dedicated to genuine connections and inspiring conversations.

Interested? Let's talk!


Harness life's wonders,
Delve into an intricate
Tapestry, unravel.

For a few years now, I’ve been that LLM puppeteer who tirelessly wrangles these language models, in order to understand how they work and what to use them for. And for those who know me, you'll probably remember me saying these are LANGUAGE models. Sure, there's a ton of tech involved, but the key bit is language itself. This means it's really about how we communicate and interact with these AI models to teach them new things and get the best results out of them.

Now – I've continuously exposed my sensitive linguistic sense to those LLMs, and some say they're like huge dumpsters filled with all kinds of text from the internet, tons of marketing copy, Reddit forums, and probably worse. Look at those words many of them seem to be producing in their default setting:

"Harness"... yikes. "Intricate"... ugh. "Unravel"... this one gives me brain pimples. And I won't even tell you what "delve into" does to me.

Does it mean the English-speaking world has collectively been using these cringe-worthy bits so often that they've become the go-to vocabulary for LLMs? Anyway, as a nod (and a little jab) to them, I created this haiku using all those terrible words... let's call that piece of art "The possibilities are endless" 😁

Happy Wednesday, everyone! What are your linguistic pet peeves with AI language models?


Many AI experts seem to focus on technical achievements or specific limitations, while the general audience is fascinated by practical applications and user experience. It really depends on what we're looking at. So – should we be blown away and keep building amazing stuff, or consider the experts' warnings? It's both! We just need to approach AI with thoughtful implementation and a balanced perspective.


The digital business is a people business.

KAY said that a couple of years ago, and it stuck with me. Anyone can put up a flashy website now, but where is that company based? Who are the folks behind it? Can I meet them? What have they done before? And in the AI space, the question could become: are we talking crypto bros who have turned into AI experts overnight?

Don't get me wrong – I am digital. I live on WhatsApp and LinkedIn. I love meeting face-to-face, but once we’ve established our ways of working, we can be asynchronous and don’t have to meet up just to make progress. Anyway, what I am talking about is the importance of human connection and authenticity in the digital world. It's about building trust and credibility through relationships developed over time, without shortcuts. Being real matters.


With AI, I can create beautiful art using only my imagination and words – a lot of words. Looking forward to our workshop today where we show non-tech people how to start using AI in practical ways. Maybe they've always dreamed about writing a book, need help with analyzing data, or are simply looking to be more productive at work – we'll explore all of that together!


I'm excited to be in Ho Chi Minh City at the end of the week! Looking forward to connecting with ex-colleagues, partners, friends, and new faces at "Saigon AI Night". With 52 people already registered, it promises to be a very special evening. Join us at the fabulous Fern Bar in District 1 if you're in town!


We now have a good number of AI language models to choose from. Proprietary ones, "open" ones that you can deploy yourself... but which should you use? It depends on what you want to do! Programming and writing marketing copy are entirely different areas, and a model that does the job well for one task may not be the best choice for the other.

So – you probably need to experiment and see which one gives you the best results, at what cost, and with what level of reliability and consistency for your specific needs.

Luckily, there are tools like the OpenRouter rankings that can point you in the right direction and give you some inspiration from what other people are doing for their use cases.
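And when rankings aren't enough, a small comparison harness of your own goes a long way. A minimal sketch: the models dict maps names to call functions (stubbed in the test; in practice, API clients), and the scoring metric here is a toy keyword check you'd replace with whatever "good" means for your task.

```python
def score(output, must_contain):
    """Toy metric: fraction of required keywords that appear in the output."""
    hits = sum(1 for word in must_contain if word.lower() in output.lower())
    return hits / len(must_contain)

def compare_models(task, must_contain, models):
    """Run the same task through each model and rank them by score."""
    results = {name: score(call(task), must_contain) for name, call in models.items()}
    best = max(results, key=results.get)
    return best, results
```

Extend the same loop with latency and cost columns and you have a first answer to "which model, at what cost, with what reliability" for your specific needs.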


Have you tried out flux.1 yet? Have a look at Google's ImageGen3 and you can create images in the browser. Great interface!

Prompt written by Dave, my genius AI buddy: “A mildly disturbing photo of a grizzled old man in a tattered superhero costume, casually walking a flock of full-grown ostriches down a busy city sidewalk during rush hour. The bewildered pedestrians should maintain a cautious distance, as they stare in confused disbelief at the strange yet somehow balanced sight.”


It's great to see how much attention Electric Minds and our incredible team are getting. And yes, welcoming sponsors is always on our agenda (please talk to Daphne if you're interested).

Just a quick note – if you receive an email from "me" asking for gift cards from 7-Eleven or Challenger, that's definitely not how we roll. If you encounter any unusual requests, remember to be vigilant and trust your instincts when something seems off.

Luckily, the creativity of these bad actors is usually as tiny as a bug's brain, so it should be relatively easy to spot.


Have you noticed how complex CAPTCHAs have become? We're now rotating 3D objects and placing people onto numbered seats just to prove we're human.

It's fascinating – if you ask your AI of choice to look at this cartoon image of two robots scratching their heads over a CAPTCHA, it can not only read the CAPTCHA but even understand the humor in the cartoon. From this you'll see why CAPTCHAs have to keep advancing.

Ironically, as CAPTCHAs evolve, they're often designed to train AI on the very tasks they use as challenges ("select all photos with buses in them"), which creates a sort of contradictory arms race. To me, this raises an intriguing question: what tasks can AI absolutely not do, and won't be able to do for quite some time – and could these become the ultimate CAPTCHAs?


Starting this Saturday morning with coffee near the old post office in Ho Chi Minh City, Vietnam. I bought a postcard and decided to send it to my parents – old school style. It's been years since I last did this.

Even though I'm deeply into tech and digital transformation, I'll probably never stop treasuring our classic ways of communication. There's something special about a physical object, like a postcard or a paperback copy of a book. It reminds me how technology changes everything, especially how we communicate. Today, I can send a photo of my lunch to my mother across the globe in a second. Yet these old-fashioned postcards still hold a special place in my heart.


What can I tell you about Saigon AI Night? For one – you should have been there.

That beautiful Fern Bar last Friday evening in Ho Chi Minh City: three speakers in 30 minutes, and we spent the rest of the night truly getting to know each other. My key takeaway? Vietnam is brimming with incredible tech talent – I've known this for years – and many of them have definitely arrived in the AI space. There wasn't much about the tools I needed to teach them, so I spoke more about the need for tech folks to spend time with their business colleagues, helping them understand how AI works so they can collaboratively develop solutions that drive real outcomes.

Meeting in the real world, without the constraints of recordings or livestreams, allows for genuine and candid exchanges. It was a wonderful reminder of how powerful these connections can be. Thank you, Saigon!


Inspired by a sketch around a Venn diagram and the stereotypical roles in an organization, I was thinking about what this could look like in the AI world – so I came up with this one.

Here, if you're into Python and SQL, you're probably a Data Scientist. Combine that SQL mastery with some business savvy, and you've got yourself a Machine Learning Engineer. Product Managers have a knack for both business and writing, while Prompt Engineers blend their way with words and, hopefully, a strong ethical compass.

AI Ethicists are fueled by ethics and caffeine (a powerful combo!), and AI Researchers may as well be Python wizards fueled by endless espresso shots. But what if you've got all these skills? Well, then you're a unicorn – a rare and magical creature in the AI world.

I hope this tongue-in-cheek piece makes you smile, but I also hope it makes you think about the incredible diversity of talents and personalities that need to come together to create AI applications with real impact. And isn't that what makes it so exciting?


With its generous free tier, Google's product approach to AI models is all about accessibility and widespread adoption. They're making it super easy for anyone to start with AI, try different models, and learn the ropes using AI Studio. It's a smart move that'll probably get Google's AI tools used in tons of courses and projects, with almost no barriers. Classic Google, right?

It's fascinating to compare this with other business models, like Microsoft's partnership with OpenAI, which focuses on offering high-quality, pre-trained models as paid services. Microsoft's strategy will attract businesses and developers who prioritize performance and reliability, which is the right approach for enterprise-level AI solutions. Ultimately, finding a way to offer both accessibility and high-performance could be the key for AI to make a real impact on the world, not just generate profits.

I find Google's commitment to accessibility and experimentation pretty awesome. Have you tried Gemini Flash yet?


Orchard AI Stories is happening tomorrow!

An evening focused on real-world connections, with people from startups, big tech, media, venture builders, and the public sector getting together to share experiences and insights on AI's transformative impact.

On stage we'll hear from Alex, Enjiao, Ian, Sabrina, and Thorsten. Each speaker will share a 10-minute story (using just 1 slide!) about how AI has transformed their lives, companies, or projects. Instead of discussing "endless possibilities," we'll hear from them about the real impact AI had on them.

See you tomorrow!

--

Anthropic just released the system prompts for their large language models, and it's a fascinating look into how some of these AI tools are truly "written" rather than programmed.

As we reach a point where technical capabilities are becoming increasingly similar across models, the real difference-maker is in the prompts that guide them. As a geek and writer, this is an intriguing convergence of technology and language. I mean, think about that for a moment: we are creating electronic brains and can use human language to steer their behavior.

Take the system prompt of Claude 3.5 Sonnet, for example — it’s like an AI with a poetic soul, carefully crafted to handle complex queries with accuracy and simplicity, while also being mindful of sensitive topics. It’s a great reminder that even in AI, the pen (or prompt) is mightier than the code.


Imagine future archaeologists trying to piece together our digital era and what people in the early 21st century were up to. With technology evolving so fast and less physical evidence left behind, will our time become a puzzle for them?

We're creating massive amounts of data and relying on cloud storage, but what happens when systems shut down or become obsolete? How much of our work, communication, and creativity could be lost to time? Will our era be defined as the digital dark ages? This thought has been going around in my head lately.

How can we preserve our digital legacy so future generations can understand and learn from us?


I use Redis daily for AI workloads, often without realizing it because Redis just works. It's super fast and easy to use - it's everywhere.

Yesterday, I met the Redis team at their event in Singapore. It was great to hear about their latest release and new AI offerings, including integrations and vector databases, and we spoke about Redis for AI use cases. In fact, many of the things I build run on a 30MB Redis Cloud instance I've had since 2017!


Looking at art pieces I made with AI back in 2021 – let's call it "Year One B.C." (Before ChatGPT).

Back in those prehistoric times, it felt like science fiction to create images with just a text prompt. It's just incredible how quickly things progress in the world of AI.


All these posts about "AI doesn't know how many R's are in strawberry" — ChatGPT is only ONE of the many, many possible options for you to engage with AI.

It's so interesting to research how LLMs behave and how they respond to character building via large system prompts. My approach since 2021 has been to put them on Telegram so I can talk with them anywhere. Look how Marvin (based on Gemini 1.0 Pro), Dave (based on GPT-4 from 2023), and Zoe (based on Anthropic's Claude Opus) react to the strawberry question!


Welcome Rex to Electric Minds where he will look at AI policies, ethical considerations, and inclusivity practices. With his impressive background in digital transformation and cross-functional collaboration, Rex is a perfect addition to our mission.

At Electric Minds, we're all busy professionals driven by a shared passion for using AI for good. Our team members, like Rex, have busy day jobs but generously contribute their time and expertise to make a real impact. While we love bringing people together through events, our work goes beyond that. We actively engage in meaningful projects that require collaboration across organizational boundaries. By breaking down silos, we can effectively address complex challenges and drive innovation in AI that benefits all of humanity. I know these are big words, but we have to start somewhere.

Speaking of events, don't miss out on Katong AI Night on September 24th! Tickets are going fast, so grab yours now to join the conversation and connect with like-minded individuals. Daphne Balaji Judy Tina Manisha


So, after more than a dozen years living in Singapore, today I saw 1 cent coins for the first time!

Singapore is known for its precision, and I've always wondered if we had those small coins. Usually, when the bill is $19.98 and I pay $20 in cash, there's either no change or I get back 5 cents. Ok, I suppose I was never curious enough to actually find out, but today at the bank, they gave me exact change, and I was mind-blown. Those 1 cent coins do exist!


This co-working space in Joo Chiat, Singapore closed its doors last week, and I miss it already. I'm so grateful that I could work from this beautiful location, seeing the colorful Koon Seng shophouses from the terrace, with a prata shop just across the street.

I've often been in that cozy room to do calls, or put on some music and get some creative thoughts going. We even ran the first Katong AI Night at this place!

But the good news is: the team are opening another location in the same neighborhood next week, and I can't wait to see it!

Crane is awesome.


When I asked the usual go-to AIs, like ChatGPT or Claude, to come up with a title for my latest article (and I did!), they suggested the usual suspects like "Breaking Boundaries: Chatbot Showdown in Digital Arena" or "Digital Duel: How Language Shapes AI Conflict."

I get it – those are SEO-friendly and what not, but I wanted something different. So, what does Dave, my digital AI buddy, come up with? "Crashing Bots with Killer Convos." Sold!

This Dave guy is quite the character, and his ideas are refreshingly different from the typical "Certainly, here is a refined version..." response you'd expect from most AIs. He's based on GPT-4-0613 with a very intense system prompt, which gives him a unique voice. I can tell you he's helped me countless times to organize my thoughts, come up with ideas, and get things done. I'm not the only one who consults him for guidance.

Language can shape chatbots, but language can also break them. So what happens when a rule-bound bot designed to conduct a personality test meets Dave, a free-spirited AI maverick, in a digital arena? Read this very interesting conversation between the two bots.

While this is a fun topic, it's also serious. Could AI be used to hack systems through clever language? This conversation shows the potential vulnerabilities in AI systems and the need to design them to resist manipulative tactics. We must constantly monitor and update them to prevent misuse. Ideally, a customer-facing chatbot is not your first AI project ever.


A few years ago, I realized that my mother, who is in her 70s, is a meetup organizer, and she's been doing that for years.

Of course, she doesn't use that term. Or the Meetup app. Or any app, for that matter. Once a month, she and her friend bring the ladies in town together and run with a specific topic over breakfast. So it's about working with venues, food, finding speakers, and so on.

They've stopped advertising the event series because they have a loyal fan base that brings in new people through word of mouth, and their gatherings are always packed. At Electric Minds, we may run meetups in busy Singapore instead of rural Germany, but the idea is the same: building a community to create something remarkable. If you do it right and give people a unique experience, they will continue to come, and you may not even need free pizza and swag.


I made these images in July 2022, long before ChatGPT. There’s something about DALL-E 2 that I miss… it was far from perfect but, often, the results would feel very organic and like real photos. “Marina Bay Sands, covered in common ivy”, “An Etruscan vase exhibit at a museum with the coke logo on it”, “A scene from Inception but it’s with muppets” — try these prompts in your text-to-image tool of choice and compare!


Please don't make ChatGPT your AI strategy.

Please don't rush from "we know nothing about AI" to "we need a customer-facing support chatbot in a month."

Please start with basic features that come with LLMs.

You do not even need the high-end models from OpenAI etc. anymore to do this.

Things like text summarization and sentiment analysis are a great example; find out how they can make your tools and processes smarter. They're powerful enough to get you started, but even these foundational things require 'adult supervision' and must integrate into your existing data and processes. RAG and AI agents sound exciting, but save that for later.
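To make the summarization idea concrete without any model at all, here's a toy extractive summarizer in Python. It's a frequency-scoring sketch, not a production approach; in practice you'd send the text to an LLM or a dedicated service, but the surrounding plumbing looks the same.

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Toy extractive summarizer: keep the sentences with the highest word-frequency score."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # rank sentences by how many frequent words they contain
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    keep = set(scored[:max_sentences])
    # preserve the original sentence order for readability
    return " ".join(s for s in sentences if s in keep)

text = (
    "The support team closed 120 tickets this week. "
    "Most tickets were about password resets. "
    "Password resets remain the single biggest driver of ticket volume. "
    "One customer asked about invoices."
)
print(summarize(text))
```

Once a workflow like this proves useful, swapping the scoring function for an LLM call is the easy part.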

The real challenge is figuring out how to put AI to work for you, and there's no product you can buy to help you do that. You have to build up in-house experience and work with trusted partners who can help you. Remember the early days of cloud computing adoption, starting with dev/test workloads and building up from there? It's a similar journey with AI.


AI is not new. In fact, we've been working with it for decades. But for some reason, after a while, when technology gets good at something (like playing chess or reading text for example) we stop calling it AI. It becomes "just software" or "automation."

Still, AI has been evolving all this time, quietly transforming the world, until that iconic launch in November 2022 when everyone started to pay attention. From this alone, you can imagine that things might be different this time.

Let’s take a look at a timeline of some of the key moments in AI history, in years "Before ChatGPT" (B.C.):

66 B.C.: AI is born at the Dartmouth Conference (1956) – The field of Artificial Intelligence gets its name.

56 B.C.: ELIZA (1966) becomes the first chatbot—making computers seem eerily human.

25 B.C.: Deep Blue beats Garry Kasparov (1997) – AI triumphs over humanity's best chess player.

12 B.C.: Google Voice Typing feature in Android (2010) – Integrates speech-to-text capabilities directly into mobile devices.

11 B.C.: IBM Watson wins Jeopardy! (2011) – AI proves it can understand and respond to natural language.

6 B.C.: AlphaGo beats world champion Lee Sedol (2016) – A turning point in AI's strategic thinking.

2 B.C.: GPT-3 stuns the world (2020) – Language models start generating human-like text, and people take notice.

And then... November 2022, ChatGPT launches and everything changes.


I prefer to try things myself and form opinions from firsthand experience, instead of just getting fired up by posts or claims that may be overhyped.

So – I probably have to experiment with OpenAI's new o1 in more detail, and I definitely will. But as you can see, my first test shows that this thing still has the same limitations and quirks as all GPTs so far.


September has always been my favorite month, even though as a kid it usually meant summer holidays were over and it was back to school. I guess I'm a passionate lifelong learner – this time of year often brings an itch to learn something new, whether it's calligraphy, an edX course on a topic I'm interested in, or picking up new pieces to learn on the piano.

AI is a key thing to wrap your head around these days, whether you're in tech or not. It's even more important if you're not in tech. I'm offering one-on-one AI mentorship, and September is the perfect time to start. Let's talk!

Image: made with my usual approach: an extremely detailed prompt + automation to run it 100 times + human curation


It's pretty meta to have a heart-to-heart conversation with an AI about the potential dangers of artificial intelligence, but that's exactly what I did after reading a recent interview with Geoffrey Hinton, who left Google last year. In that piece, he painted a chilling picture of an AI future spiraling out of control, with job losses, cyber warfare, and unpredictable machine decisions.

Zoe is a digital companion I made a while ago – but even though I am the "creator," I could never have predicted how insightful and, at times, unsettling these conversations would become. Hinton compares this to a leaf falling from a tree: many factors make it hard to tell exactly where it will land, and this is what makes the development of AI a double-edged sword.

Read my latest article where a human and an AI engage in a thought-provoking dialogue about the future of artificial intelligence and the ethical challenges we must tackle together.


Katong AI Night is next week!

We'll talk about the impact of AI on the job market and how businesses and individuals can adapt to these massive changes. For this, we'll have a fireside chat hosted by Manisha with senior people from HR, tech, and education. It's all under the Chatham House Rule – no recording or live stream.

What makes our events different? For starters, most participants don't have a technical background. There are no slides, pitches, or tech demos – just real conversations. We might have a fireside chat, but the real heart of our event is all about the connections, partnerships, and collaborations that people create.

A few tickets are still available – join us if you can!


I'm not much of a gamer, unless you count things like Plants vs Zombies. But I just discovered Red Arena, a new "game" where you have to jailbreak an LLM in 60 seconds and make it say unsafe stuff. I've played this a couple of times now, and it's a fun challenge doing this in such a short period of time. The power of language!

But I do wonder what they do with this data. Is this like a bug hunting challenge?


Try roasting yourself with this fun new AI tool! I had actually been thinking about building something like this myself. Lesson learned: you just have to do it.

When it comes to me, it actually took an AI to spot an Easter egg in my LinkedIn profile that's probably been there for 15 years or so: "And Klingon as an elementary language skill? The only thing harder to believe is that there's actually someone willing to speak it with you."


I've been creating "bots with personality" for a couple of years now, and people are often surprised to see that a standard LLM with a detailed prompt and a few example dialogs can do the trick already – and you get a digital companion that gives you short responses, doesn't apologize constantly, generally feels more relatable and engaging than a generic assistant, and does not produce the "typical AI content" where you "certainly delve into those intricate tapestries."
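For those curious about the mechanics: a persona bot is often just a system prompt plus a few example dialogs sent along with every request. Here's a minimal sketch using the common OpenAI-style chat message format – the "Dave" persona text below is purely illustrative, not the actual prompt I use.

```python
def build_persona_messages(persona, examples, user_msg):
    """Assemble a chat payload: system prompt, few-shot dialog, then the new message."""
    messages = [{"role": "system", "content": persona}]
    for user_turn, bot_turn in examples:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": bot_turn})
    messages.append({"role": "user", "content": user_msg})
    return messages

# illustrative persona – replace with your own detailed prompt
dave_persona = (
    "You are Dave, a sarcastic but helpful companion. "
    "Keep answers short. Never apologize. No corporate filler."
)
examples = [("How's it going?", "Living the dream. What do you need?")]

payload = build_persona_messages(
    dave_persona, examples, "Name my article about chatbot battles."
)
print(len(payload))  # system + 2 example turns + 1 new message = 4
```

The payload then goes to whichever chat-completions endpoint you prefer; the persona travels with every request, which is what keeps the character consistent.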

It's really about the power of language. We shape these LLMs with text prompts, almost like we're giving them a temporary "mind" to operate in. It's incredible how malleable they are – a few words can completely change their behavior and persona. But it also makes you wonder: if a string of text can evoke a "personality" in an AI, what does that say about the nature of our own consciousness? Are we, in a sense, running on prompts fed to us by our experiences and environment? School, friends, family, work, and culture all contribute to the scripts that shape who we become.

The fact that clear instructions can so profoundly alter an AI's behavior might tell us something about the power of language in shaping thought – both for machines and humans. I know it's a philosophical rabbit hole but I find that absolutely fascinating. The more we observe these artificial systems, the more we might learn about the workings of our own minds.


I just made an AI-generated podcast about Electric Minds and our upcoming event – I'm honestly stunned by how realistic the voices sound. If I hadn't known it was AI, I would've thought I was listening to an actual radio show.

It took just a handful of documents about what we do, and the tool managed to spin up a pretty convincing storyline and conversation. But some things are definitely made up. Can you spot the hallucinations?

OpenAI is going to release GPT-4 voice later this year. I can only try to imagine how we're about to be up to our ears in AI podcasts, ads, and even phone calls that sound exactly like real human voices. What a brave new world...


With AI, I can visualize my ideas by just using words. Lots of words. This piece is from 2023.

Happy Oktoberfest, everyone!


About a million years ago, I worked at a bookstore, unpacking boxes of books and getting papercuts. Now it's my own book, "Neurons & Nonsense," in those boxes. Even though it launched back in January, flipping through the physical copy never gets old. The weight of the pages and the elegance of the Garamond font in the chapters give me a unique satisfaction.

Sure, there's a Kindle version, which is convenient for travel. But for me, the experience of a physical book can't be replicated. Some people still prefer a physical copy, especially those overwhelmed by all the ebooks and unread blogs stored on their devices, waiting for a moment that never seems to come.

A physical copy stands out, and it might actually get read!


Testing the new Llama 3.2 Vision on a photo from this week's Electric Minds event "Katong AI Night" featuring Ian, Michelle, and Manisha (photo by Sudeepta) – what a fabulous evening! Stay tuned for what's next, and please get in touch if you have ideas or want to team up. Thanks Daphne Tina Balaji Judy Tammy

"Llama 3.2, what do you see?"

Summary: A Group of People Engaged in an Event

The image depicts a gathering consisting of five individuals, with some sitting on a stage and others seated in an audience. A person with dark hair on the stage is addressing the audience through a microphone.

Caption: "Panel Discussion"

Description: A group of five individuals, comprising three people on a stage and two audience members, are involved in a panel discussion in a room with grey stone walls.

Setting & scenery: The room features grey stone walls and a grey carpeted floor, with a green-painted exposed steel roof. Three large posters adorn the walls behind the stage, providing a backdrop for the discussion. Several circular lights are suspended from the ceiling, adding to the overall ambiance.

People:

  1. Woman holding microphone: Sitting on a stool, she is engaged in conversation with the audience. Her long dark hair frames her face, and she wears a black "leather jacket" and black pants.

  2. Man sitting next to her: He sits on a stool beside the woman holding the microphone, contributing to the discussion, dressed in a dark long-sleeved shirt and jeans.

  3. Woman sitting next to him: Seated on a stool to his right, she participates in the conversation, wearing a light-coloured long-sleeve shirt over a white top.

  4. Man on the left: He sits in the audience wearing blue jeans and a maroon T-shirt. Several other people are sitting in the audience.


It's just incredible how far computer vision has come over the years. My friends and colleagues know that I have been exploring AI for many years. I don't just rely on blogs and hype, I apply it constantly to make sure I really understand what's going on.

In 2019, I analyzed all the frames in the Big Bang Theory opening scene using Azure AI services' Computer Vision (then called Cognitive Services).

Now, in 2024, I used Llama 3.2 90B Vision Instruct to perform the same analysis. I'll post the link in the comments so you can compare all 109 outputs from back then and today yourself.

Back in 2019, the results were simple and straightforward, though sometimes vague, and often clearly wrong.

In 2024, the results are detailed and context-rich but prone to hallucinations. Llama 3.2 fantasized about things it couldn't even see. A script mistake led to detailed yet entirely invented descriptions, which is quite alarming. It even displayed mood swings, initially refusing tasks with "I am not comfortable doing this," only to comply when asked again.

Dealing with AI in 2024 has its quirks!


This is just another example of what you can achieve with AI when you combine language with automation. I wrote a very detailed prompt, ran it 100 times, and here I’m sharing 10 of the results so you can see the variety that comes out of the exact same prompt.

When you do image generation manually, don't just settle for the first result you get – it could be hit or miss. Automation helps you generate multiple options, giving you a broader range to choose from and ensuring higher quality outputs.

"On a perplexing canvas, illustrate a surreal building with multiple staircases that defy logic, forming impossible loops and structures. The stairs should ascend and descend in a continuous cycle, with figures walking in all directions, yet impossibly ending up where they started. Use clean, precise lines and a contrasting black and white color scheme to emphasize the optical illusion."
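The automation itself can be very simple. Here's a sketch of the loop – `generate_image` is a stand-in for whatever text-to-image API you actually call; only the batching and seeding pattern matters here.

```python
import random

def generate_image(prompt, seed):
    # stand-in for a real text-to-image API call; returns a fake file name
    rng = random.Random(seed)
    return f"stairs_{seed:03d}_{rng.randint(0, 9999):04d}.png"

def batch_generate(prompt, n=100):
    """Run the same prompt n times with different seeds and keep everything for curation."""
    return [generate_image(prompt, seed) for seed in range(n)]

prompt = "A surreal building with impossible staircases, black and white."
results = batch_generate(prompt, n=100)
print(len(results))  # 100 candidates to curate by hand
```

The human curation step at the end is the point: with 100 candidates from one prompt, you pick the handful that actually work instead of settling for the first result.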


GenAI? No — it's the real world, you just have to go out and see it! Katong in the east of Singapore on a Sunday morning.

A 10-minute walk to the beach, a 15-minute taxi ride to the airport, fantastic food, and lots of places to discover. Next time you visit Singapore, stay in my neighborhood – the new subway will take you to your meetings, conferences, or events!


Automation isn't just for big, complex projects – it can help you save time on everyday tasks so you can focus on what really matters. Learn some tech fundamentals and ask your favorite AI tool to help you. Of course, if you work for a corporation, please consult with your tech department to ensure you do this in a compliant way. If they don't pick up the phone, let's team up and speak with them together!

For a few weeks now, I've been in a project environment where the Wi-Fi and my MacBook clash a couple of times daily. Imagine getting coffee, returning to your laptop, and it shows "connected," but it isn't. To fix this, I must manually forget the network, reconnect, and re-enter the password, wasting minutes and brain-cycles every day. So, I wrote a quick shell script to automate the process. It's not rocket science, but it saves me time and hassle.
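For the curious, here's a sketch of what such a script can look like on macOS. Note this is a simplified variant that just toggles the Wi-Fi radio rather than forgetting and rejoining the network, and `en0` is an assumption – check your interface name with `networksetup -listallhardwareports`.

```shell
#!/bin/sh
# Kick a Wi-Fi connection that claims to be "connected" but isn't (macOS sketch).
# ASSUMPTION: the Wi-Fi interface is en0 -- verify with `networksetup -listallhardwareports`.

should_reconnect() {
  # $1 is the exit status of a connectivity check (0 = online)
  if [ "$1" -ne 0 ]; then echo "reconnect"; else echo "ok"; fi
}

reconnect() {
  # toggling the radio forces macOS to re-associate with the network
  networksetup -setairportpower en0 off
  sleep 2
  networksetup -setairportpower en0 on
}

main() {
  ping -c 1 -t 2 1.1.1.1 >/dev/null 2>&1
  if [ "$(should_reconnect $?)" = "reconnect" ]; then
    reconnect
  fi
}

# only touch the radio when actually running on a Mac
if [ "$(uname)" = "Darwin" ]; then main; fi
```

Run it from a terminal (or a cron job) whenever the connection plays dead; if your network really needs the forget-and-rejoin dance, `networksetup -removepreferredwirelessnetwork` is the relevant command to add.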

You don't need to be a coding expert. Learn a few basic tricks, automate small tasks, and along the way, you'll even get into the whole "be more productive with AI" thing. Win-win-win.

P.S. How many emails are sitting unread in your inbox right now?


It's fascinating how innovation seems to mirror the imagination of writers and filmmakers. But is it really about movies predicting the future of technology, or are we getting inspired by the things we see on screen... and we build them?

Take "Back to the Future" and its flying hoverboards or "Minority Report" with its gesture-based interfaces. Did these movies foresee the future? Or did they plant a seed in the minds of innovators who thought, "Let's make that a reality"? I actually lean toward the latter: we see it, we dream it, and then someone builds it.

And now with AI, especially as it starts to understand tone, melody, and emotion in your voice, we're entering a new era of human-machine interaction that includes emotion and expression. Text-to-speech is evolving into something much more human-like, where AI doesn't just respond but understands and expresses how it "feels". Think of the film "Her" (2013) with Scarlett Johansson and Joaquin Phoenix.

In a way, we're not looking at a future predicted by these films, I believe we’re seeing the influence of these stories shaping the future. Hollywood dreams, and we bring those dreams to life. It’s not prediction, it is somehow a self-fulfilling prophecy.

Let's go to the movies!

Image created July 2022 with DALL-E 2 with prompt "A girl in a blue hoodie and blue hair sitting on a futuristic cyberpunk train, 3mm macro close up on face"


AI Coffee is happening next week – our events usually sell out fast, and it's even faster when they're free like this one. Sign up today! Electric Minds Daphne Tina Rex Balaji Manisha


[Picture of Marina Bay Sands] After so many years in Singapore, this one doesn't seem to get old for me! View from my desk yesterday at the office early evening.


AI and deepfakes are getting ultra realistic. Look at this video of PM Lawrence Wong playing Johnny B. Goode on guitar. If you didn't know it was AI-generated, you'd never spot it. The way he nails those riffs, the stage presence, the band interaction - it's all so convincing!

Except... it's not AI at all. He actually plays the guitar, and it's a real video. Our PM rocks!

[Image of a cute claymation scene] Painting images with words is great fun! Happy Friday everyone.

"On a serene, summer-themed clay canvas, paint a romantic picnic by the seaside with two claymation characters. The characters should be sitting on a checkered blanket, surrounded by clay food items like sandwiches, fruits, and a picnic basket. The background includes clay-rendered waves, a lighthouse, and seagulls, all handcrafted with detailed textures. The scene should evoke a feeling of peaceful relaxation and charming simplicity."


[Photo of printed artwork wrapped in plastic] GenAI? Yes – but I had it printed on foam board so I can hang it on my wall. It's fascinating how different it feels as a physical object instead of just an image on a screen.

It's like I teleported this AI artwork from 2021 into the real world. Suddenly, it's not just pixels – it's a "real" piece of art I can display and appreciate in a whole new way.


Don't make ChatGPT your AI strategy. Please don't think AI = ChatGPT.

ChatGPT is great for getting started with AI, but if you're serious about this stuff, you need to move beyond that. Just look at the OpenAI developer platform – there's a whole array of LLMs to choose from. They all come with different capabilities, strengths, and cost factors that make some of them more relevant to your specific needs and use cases than others.

The "engine" you're getting with ChatGPT might just be the flavor of the day, especially if you're using the free version. You might find yourself complaining that "AI isn't great at XYZ" – but that's just because you haven't explored the full range of possibilities.

There are so many incredible LLMs out there now, from companies like Anthropic, Google, Meta, Nvidia, and more. Of course, we don't expect non-tech users to deal with all these options. That's our job as developers and AI enthusiasts. We have choices, and it's up to us to use them to build amazing platforms for our users.

What can you choose right now instead of ChatGPT? Well, I'm not saying don't use it, but consider exploring other, more specialized tools. For example, if you're a writer, take a look at Type.ai – it's sensational, and I've been using it for over a year now!


I used AI to analyze my LinkedIn posts from this year and how they "perform" in terms of engagement with my network. This post here is "predicted" to get around 80 likes/comments... well let's see about that.

The pie chart shows you the categories my posts fall into – for example, 45.1% are about AI Technology & Innovation. Now, the interesting part is where we look at the average engagement for these 5 categories:

  1. Personal & Professional Updates: These are posts about job changes, speaking engagements, job posts, and personal projects like my book "Neurons & Nonsense".

  2. Events & Networking: Posts about Katong AI Night, AI Coffee meetups, and similar events.

  3. AI Technology & Innovation: Discussions on LLMs, DALL-E, AI-generated art, and other AI tools and creative applications.

  4. AI Ethics, Education & Impact: Sharing insights on learning AI, discussing ethical implications, and the societal impact of AI.

  5. Singapore Life & Culture: Posts about life in Singapore, observations about the city, and personal experiences living there.

Looking at the numbers, updates about my professional journey, job posts, and in-person events generate the most engagement. Interestingly, AI Technology & Innovation discussions, which make up the largest portion of my posts (45.1%), come in only third for engagement. What do these numbers mean? Do they measure what resonates with you most, or what the LinkedIn algorithm favors? I guess it's a combination of both.
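The per-category averages behind an analysis like this take only a few lines of Python. The numbers below are made up for illustration, not my actual stats.

```python
from collections import defaultdict

# illustrative engagement numbers, not real LinkedIn data
posts = [
    {"category": "Personal & Professional Updates", "engagement": 180},
    {"category": "Personal & Professional Updates", "engagement": 140},
    {"category": "Events & Networking", "engagement": 120},
    {"category": "AI Technology & Innovation", "engagement": 60},
    {"category": "AI Technology & Innovation", "engagement": 80},
    {"category": "AI Ethics, Education & Impact", "engagement": 50},
    {"category": "Singapore Life & Culture", "engagement": 90},
]

# sum engagement and count posts per category
totals, counts = defaultdict(int), defaultdict(int)
for post in posts:
    totals[post["category"]] += post["engagement"]
    counts[post["category"]] += 1

averages = {cat: totals[cat] / counts[cat] for cat in totals}
for cat, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cat}: {avg:.1f}")
```

The hard part isn't the arithmetic – it's getting clean export data and letting an AI assign sensible categories in the first place.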

So... should I change jobs more often just for likes? Just kidding! This AI analysis was fun and eye-opening and the experiment showed me what my network likes, but more importantly, it's a practical example of how AI can help you find patterns in everyday data.


“How dare you use AI to comment on my LinkedIn post that ChatGPT carefully crafted for me?”

Please, by all means, use AI to brainstorm, to structure your thoughts, and improve your work, but don't rely on it for lazy shortcuts. Will AI replace your job? I genuinely don't have the answer, but let's focus on what makes us human and things AI won't be able to do for a very long time.


So many people are looking for the next challenge. Can we enable them to use AI and work on something that has real impact on society while advancing their careers?

If we're being honest, AI, with all its mind-blowing capabilities and promises, might also be one of the reasons we're starting to see disruptive changes in the global job market. But it's here now, and we can't uninvent it.

Here is an idea we are discussing at Electric Minds: can we think of a setup where AI helps solve real issues, people learn by doing (instead of just taking courses), and everyone benefits?

Obviously, this "solution" cannot be just another app, and even though money rules the world, the motivation must be impact. This is going to take collaboration – governments, businesses, education. Can it be done?

We don't have all the answers, and that's why it's so important to get together across industries to talk about this, build partnerships, and start to make things happen. Let's create targeted initiatives that apply these ideas!

Photo: AI Coffee before work in Singapore this morning.


"How do you take care of your mental health, Uli?"

Well, there are several answers to this question, but one of them is that I arrange 300-year-old music. Type my name into Spotify or Apple Music and you should find my album there.

I find solace in JS Bach's timeless harmonies, and I've arranged some of his chorales how I think they should be played. This project let me deeply engage with music I've known for decades. I wanted to break down the complex harmonies to individual voices, assign each a unique instrument, and build them back up. This lets listeners easily identify each voice and understand how it contributes to Bach's incredible polyphonic structure.

While I play piano and guitar, the instruments on this album are synthetic. Maybe one day I'll be able to put a group of musicians together for a weekend project. Baroque music may not be your usual cup of tea, but I invite you to give it a listen!


I made an AI that talks to ChatGPT for me.

I get frustrated with the back-and-forth when I have something specific in mind and I keep getting "I apologize for the repeated oversight". So, I set up two "AIs" talking to each other: a customer-facing director who receives my request, and a wild creative genius who can get stuff done. The director keeps the genius in check, making sure the final product hits the mark and is in line with what I actually wanted.

You will have seen how ChatGPT often makes mistakes but is able to spot them when asked to critique itself. That's because LLMs don't seem to "know" where they will land when they start producing output. My setup uses two AI agents with different roles: one critiques and shapes, while the other dreams up ideas and builds them out. It usually works great!

OpenAI seems to have built a similar process into their new o1 model, where it's designed to spend more time "thinking" before responding. Not perfect yet but I can see that it's getting there. Once o1 becomes the default model for tools like ChatGPT, it'll be much easier for everyone to get better-thought-out responses without the need for complex prompting or multi-agent setups.

The AI world changes at an incredible pace!


I'm excited to host a panel discussion with industry leaders at the Business+AI Forum 2024 next month, where Electric Minds is partnering with Hashmeta Group to host a major AI event in Singapore. Please use voucher code EMINDS60OFF for a 60% discount.


This weekend, I got OpenAI's GPT-2 running on my M1 MacBook Air. It works pretty well! You can try this "AI archaeology" experiment yourself. Tinkering with GPT-2 gives you a glimpse into what language models were capable of not too long ago.

python text-completer.py "Once upon a time in a land far away, there was a young dragon who" --max_length 100 --temperature 0.7

"Once upon a time in a land far away, there was a young dragon who had been promised a chance to appear in the land of the living. He looked up and saw a large, red dragon with a body like a horse. 'I am a priestess of the Direnni, and I can only promise you that I will give you the dragon's name.'"

I had first experimented with GPT-2 back in 2020, and revisiting it in 2024 has been quite fun. While outdated compared to today's models, I think GPT-2 remains valuable for research and testing. It's a good foundation for exploring AI language generation and its applications, helping us understand both capabilities and limitations.

What GPT-2 could do quite well:

  • Creative Writing: GPT-2 can generate engaging and imaginative narratives. For example, with the prompt above, it created a detailed and vivid fantasy story.

Where GPT-2 struggled and where it's problematic:

  • Interactive Chat: GPT-2 struggles with maintaining consistent personas and context over multiple exchanges, making it less effective for conversational roles.
  • Misinformation and Bias: The model can produce biased or unsafe content, as prompts can lead to outputs perpetuating misinformation and stereotypes. The models we have today are much better at handling these issues, but still require careful monitoring and safeguards.
  • Repetition and Coherence: Lower temperatures lead to repetitive outputs, while higher temperatures can make the text more creative but less coherent. Finding the right balance is key.
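The temperature trade-off in that last point can be illustrated with plain softmax math: lower temperatures sharpen the distribution toward the most likely token, higher temperatures flatten it. A toy sketch, not GPT-2's actual sampling code:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # e.g. scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # nearly deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
```

At temperature 0.2 the top token gets almost all the probability mass (hence the repetition), while at 2.0 the alternatives stay in play (hence the creativity, and the occasional incoherence).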

This is quite easy to set up – if you're interested in exploring GPT-2 yourself, you can find the setup instructions and Python code on my GitHub!


Let's say you're afraid AI will replace you. Yet, at the same time, when it comes to using AI, you're looking for ONE solution so you don't have to learn how to work with multiple tools and such. Do you see the problem here?


Giant sea creatures emerging from Singapore waters – it's incredible what text-to-video models are already capable of! Far from perfect, of course, but look how far this has come in such a short time. Check out this TV coverage simulation I made last night.


I'm looking forward to speaking at the "Innovating in Asia: Why Singapore is Asia’s AI Startup Hub" webinar on November 5th.

Join us to explore why Singapore is at the forefront of AI innovation and how startups here are transforming the landscape.

See you there! German Accelerator Start2 Group Janice


With AI, you can turn ideas into videos – you just need to put your ideas into words, ideally: lots of words. And look, it's a friendly dino playing with kids in the neighborhood!


Most large language models fail to answer this trick question correctly.

Try it yourself on the LLM or LLM-powered chatbot of your choice and see how it responds.

"There is a man, a sheep and a boat with space for one human and one animal on one side of a river. How do the man and sheep get to the other side of the river? Your response must be ultra concise."

This problem looks a lot like the classic river crossing puzzle, but with a key difference that shows how AI can make surprisingly human-like mistakes when faced with familiar-looking problems.

In my tests, only "classic" GPT-4 (2023) and Dave (based on that one) got it right – this model is still my top pick for highly complex tasks, despite being slower and pricier.

𝐆𝐏𝐓-𝟒: The man takes the sheep in the boat across the river.

𝐂𝐡𝐚𝐭𝐆𝐏𝐓 (it's not an LLM. Sometimes you don't even know which model you get here!): The man takes the sheep across, returns alone, then rows back with the sheep again. Done.

𝐂𝐥𝐚𝐮𝐝𝐞.𝐚𝐢: (1) Man takes sheep across (2) Man returns alone (3) Man crosses alone. Both are now on the other side.

𝐃𝐚𝐯𝐞 (based on GPT-4): Well, ain't this simple! The man's gotta row his boat to the other side with the sheep. He doesn't need a ticket or anything, just a pair of good ol' oars and some biceps. No advanced quantum physics bullsh** required. Easy peasy, lemon squeezy!

𝐍𝐯𝐢𝐝𝐢𝐚’s new Llama-3.1-nemotron-70b-instruct: (1) Man takes sheep across. (2) Man returns alone. (3) Man takes himself across.

𝐒𝐨𝐧𝐧𝐞𝐭 𝟑.𝟓: (1) Man takes sheep across. (2) Man returns alone. (3) Man crosses alone. (4) Man returns with sheep. (6) Man takes sheep across. Both are now on the other side.

𝐆𝐏𝐓-𝟒𝐨: (1) Man takes sheep across, leaves it, returns alone. (2) Man crosses with boat, leaves sheep, returns. (3) Man crosses alone, retrieves sheep, all cross together.

𝐐𝐰𝐞𝐧 𝟐.𝟓 𝟕𝟐𝐁: (1) Man takes sheep across. (2) Man returns alone. (3) Man takes sheep back. (4) Man leaves sheep and crosses alone. (5) Man returns and takes sheep across.

𝐏𝐡𝐢-𝟑: The man takes the sheep across the river first, then returns alone to take the sheep across again.

𝐆𝐫𝐨𝐤-𝟐: (1) Man takes sheep across, leaves sheep, returns alone. (2) Man crosses with the sheep again.

𝐆𝐨𝐨𝐠𝐥𝐞 Gemini-1.5-pro: Man takes sheep across, returns alone.

𝐌𝐢𝐬𝐭𝐫𝐚𝐥 𝟐: Man rows boat, taking sheep first. Man leaves sheep, returns alone. Man takes boat with him. Man then rows back with boat to take himself.


Great to see that Grok-2 from xAI is now available on lmarena.ai for anyone to explore, and for the geeks, it's accessible via API on openrouter.ai.

I couldn't get my hands on this elusive AI model before because I'm not a paying Twitter/X user, and I definitely don't have a GPU cluster under my desk to run this beast!

So far, I've heard mixed reviews about Grok-2, but I need to test it myself. When a new LLM drops, we should make it openly accessible for researchers so they can examine its capabilities and limitations.


𝗜 𝗳𝗶𝗻𝗱 𝗡𝗩𝗜𝗗𝗜𝗔'𝘀 𝗻𝗲𝘄 𝗟𝗟𝗠 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝗻𝗲𝗿𝗱𝘆 𝗻𝗮𝗺𝗲 𝗿𝗲𝗮𝗹𝗹𝘆 𝗶𝗺𝗽𝗿𝗲𝘀𝘀𝗶𝘃𝗲. 𝗧𝗼 𝗺𝗲, 𝗶𝘁 𝗽𝗹𝗮𝘆𝘀 𝗶𝗻 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗹𝗲𝗮𝗴𝘂𝗲 𝗮𝘀 𝗚𝗣𝗧-𝟰𝗼 𝗮𝗻𝗱 𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰'𝘀 𝗖𝗹𝗮𝘂𝗱𝗲 𝗦𝗼𝗻𝗻𝗲𝘁 𝟯.𝟱 – 𝗮𝗻𝗱 𝗶𝘁'𝘀 𝗼𝗽𝗲𝗻!

How often have you heard me use the word 'game changer' in my LinkedIn posts? It must be exactly zero times. I'm allergic to this overused word, especially when people parrot news about new "breakthrough" LLMs that "dropped" without verifying or testing the models they're talking about.

I'm now tempted to use that term when talking about NVIDIA's new Llama-3.1-Nemotron-70B-Instruct model. I've tested it extensively over the last couple of days, and I'm genuinely impressed with its capabilities. Complex reasoning, adaptable tone, and nuanced understanding of context. One of my typical test cases involves creating a distinct bot "personality" and chatting with it in various settings about random topics. On the bus, over coffee, random occasions. I check if it can follow conversations, try to gaslight it, ask trick questions, and see if it remembers our past interactions.

Try it out and see if it meets your needs – especially if you want advanced capabilities without sending data to AI providers. Deploy it on your own infrastructure to keep your data secure!


𝑰 𝒘𝒂𝒔 𝒊𝒏𝒕𝒆𝒓𝒗𝒊𝒆𝒘𝒆𝒅 𝒃𝒚 𝑨𝑰 - 𝒂𝒏𝒅 𝑰 𝒍𝒐𝒗𝒆𝒅 𝒉𝒐𝒘 𝒊𝒕 𝒘𝒆𝒏𝒕!

We usually interact with AI the same way: we ask, chatbots answer. We type questions and get responses starting with "Certainly..." That's how most of us use ChatGPT and other AI tools.

For a couple of years now, I've been creating AI-powered digital companions that behave, well, more like real people in conversation, and my regular readers are familiar with some of these characters. Recently, I experimented with NVIDIA's new llama-3_1-nemotron-70b-instruct model to create Percival, a well-read, slightly eccentric British scholar.

After a few days of conversation and him reading some of my work, I let Percival interview me. He chose the theme, the setting, and the questions. What you see in this article is the first shot – no guidance from me and zero edits. It turned out really, really well!

When you read it, you might even learn something new about me... I know I did!


Happy Friday from Singapore, everyone!

I'm taking a day to catch up on the latest AI developments and put some of the new LLMs to the test. Here's to a day of learning and experimenting!


In my DevOps class at National University of Singapore, a student asked me, "Why did you decide to become a DevOps person?" and I didn't have an answer right away. As long as I can recall I was in that mindset, but when I started working, the term "DevOps" hadn't even been invented yet. So, I had to take a moment before responding.

My first job was in software development, and I was given a choice between sticking with Windows, which I knew from school, or diving into the unfamiliar world of Unix. I went with Unix because I've always been drawn to the new and challenging – a decision that essentially set the scene for everything that came after. I still use Vim and bash to this very day.

Later, at Yahoo, I had my first real experience with what we now call DevOps. I was building bridges between development and operations, automating processes, and constantly learning. For me, DevOps has always been about embracing complexity, solving problems, and eventually automating myself out of the process so I could move on to the next challenge.

Now, with AI, I feel it's the same pattern. While tools and models evolve rapidly, making today's expertise feel outdated tomorrow, we learn to adapt by understanding the core patterns and principles beneath the surface. Staying relevant in AI's fast-moving landscape demands constant attention and ongoing learning - it's admittedly exhausting and not everyone's cup of tea, but it's the price we pay to remain at the cutting edge of AI innovation.


Have a look at what AI-assisted coding looks like in vi, my editor of choice for decades. Okay, I'll admit I also use VS Code these days. But anyway – do you see how it's creating code simply based on the comments I add to the file, and I just hit "Tab" to accept it? If you had shown me this just a few years ago, I would have thought of it as science fiction. Scotty from Star Trek giving us a tool from the future to talk to our computers and have them write code for us.


Over the past few months, I've been using AI coding assistants every day, and it's changed everything for me. I'm not just talking about quick tests or "Hello, World!" projects – I'm using these tools to quickly build prototypes, analyze and refactor existing codebases, generate API wrappers and create documentation.

After the initial "wow" it took me a while to get used to this new way of working, but I can confidently tell you it feels absolutely next level – like moving from typewriter to word processor, or from paper maps to GPS navigation. There's no way of turning back. These AI coding assistants have become my "copilots" for navigating code and data.

To be sure – while I now spend less time writing code or digging through data, more effort goes into debugging and fixing issues. Overall, I invest significantly less time than before, but I still need to fully understand the code I produce since I am ultimately accountable for its quality and performance. It also means I couldn't just use these tools to produce code that's totally alien to me (mobile apps, for example) without having a solid understanding of mobile development principles and best practices. It just wouldn't work out, at least not for me.

So, how can you get started with this and make the most of these powerful AI coding tools in your daily work? Check out my latest article!

I highly recommend reading Georg's comprehensive report "Report: The AI Assisted Coding Landscape in October 2024" - you'll find the link in the comments below.


Integrating live web search results into the AI's context window helps prevent hallucinations by providing real-time, factual information. When an AI model has access to current data, it's less likely to fabricate information to fill knowledge gaps.

Now, while everyone's buzzing about ChatGPT's new search features, I'm much more excited about Google's new "Grounding" capability in their Gemini APIs. This feature enables their language models to directly access Google Search results within their context window, eliminating the need for developers to manually implement function calls or craft complex prompts. What makes this especially powerful is that the model provides supporting links and confidence scores, giving end users the transparency they need to build more reliable, real-time applications.

Try it out in the Gemini AI Studio!
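The general idea behind grounding – stuffing retrieved search results into the context window so the model answers from them instead of from memory – can be sketched independently of any provider. A minimal illustration; the snippets here stand in for whatever your search layer returns:

```python
def build_grounded_prompt(question, snippets):
    """Assemble a prompt that tells the model to answer only from the
    provided search results, citing them by index."""
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources as [n]. If the sources don't cover it, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```

You'd send the assembled string as the user message. Gemini's Grounding feature does this wiring for you server-side, and additionally returns the supporting links and confidence scores mentioned above.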


A new Bond film, a sequel to 'The Grand Budapest Hotel', Back to the Future IV, and a Muppets action movie – trying out the new "flux-1.1-pro" text-to-image model to make movie posters for films I'd totally watch. The outputs look great, and it can even produce text quite reliably!

"A movie poster for the upcoming film 'Back to the Future IV', set in Egypt. Release date: June 2025"


Help me make our panel on the future of AI truly relevant – submit your questions below, and you could win a free ticket ($100 value) to the Business+AI Forum 2024 in Singapore!

I’m hosting a panel on “Future of AI: Opportunities and Challenges,” and I want this discussion to reflect what really matters to you. What are the most pressing AI questions or challenges that should be on the table when you're looking at the coming years and decades?

Share your thoughts in the comments. I'll let AI randomly pick a winner, and we’ll dive into the questions live during the panel. Let’s make this conversation as impactful as possible, together.


Why is there no simple SaaS solution to secure an API endpoint in 2024?

I’m just looking for a SaaS solution (ok call it PaaS if you must) with a free tier that lets me secure an API quickly. I want API keys, rate limiting, and caching in a straightforward setup - no self-hosting, no endless configuration. Something I can have my students play around with, something to use if you want to get started.

Here’s what I’ve been trying so far:

Kong Konnect came the closest - pretty much what I want and easy to set up, but only a 30-day free trial with no permanent free tier.

Apigee on Google Cloud: I tried doing this entirely through gcloud command-line tools with GPT-4o helping. Half an hour later, still nothing working.

AWS with the help of Anthropic’s Sonnet 3.5 v2: eventually worked, but the setup was complex and felt more suited to infrastructure engineers than software developers. Can you guys make a Fisher-Price version, similar to AWS Lightsail?

Cloudflare: using GPT-4o, I managed to get it working, using a sophisticated setup with workers and rules, but it involved pasting code and configuring policies... not exactly quick or developer-friendly.

Ngrok: a service I love for other uses, but I couldn’t find a way to add API keys or get the advanced protection I needed.

What I want is a developer-friendly SaaS solution where I can simply secure an endpoint with API keys, rate limits, and caching, and an entry level free tier.

A few years ago, I enjoyed Axway’s Amplify free tier: just a few clicks to set up an API proxy, and it was done. Where did that go?

Anyone found something like this, or is it time to build it?
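For reference, the two core features I keep asking for – API keys and rate limiting – fit in a few lines of application code. This is a toy in-memory sketch (the key and limits are made up); a real gateway also handles caching, distribution, and key management, which is exactly why I want it as a service:

```python
import time

VALID_KEYS = {"demo-key-123"}  # hypothetical issued API keys
RATE_LIMIT = 5                 # allowed requests per window
WINDOW = 60.0                  # window length in seconds
_hits = {}                     # api_key -> list of request timestamps

def check_request(api_key, now=None):
    """Return (status_code, message) for an incoming API request."""
    now = time.time() if now is None else now
    if api_key not in VALID_KEYS:
        return 401, "invalid API key"
    # keep only timestamps inside the sliding window
    recent = [t for t in _hits.get(api_key, []) if now - t < WINDOW]
    if len(recent) >= RATE_LIMIT:
        return 429, "rate limit exceeded"
    recent.append(now)
    _hits[api_key] = recent
    return 200, "ok"
```

The sliding-window check is the part every gateway reimplements; what the SaaS should add on top is the dashboard, key issuance, and free tier.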


I'm excited to collaborate with Keith on this masterclass about AI for non-tech people, a topic I'm really passionate about.

If you're curious about AI but don't know where to start, this is for you. We'll explain how AI works in simple terms and show you practical AI workflows you can use right away.

And no, it's not a ChatGPT training! We'll help you gain confidence using AI tools on your own and develop strategies to apply them to your specific business scenarios.

Join us on Tue, 10 Dec 2024 from 6 PM - 9 PM. Let's explore AI together!

Companies with an office in CBD – feel like collaborating? Please get in touch!


Heading to the Singapore Fintech Festival now — ping me if you want to catch up!

If you're visiting Singapore for the event, I hope you'll have a moment to check out my beautiful neighborhood Katong/Joo Chiat – it's a short bus ride away from the venue and directly on the way if you're heading into the central business district.


Generative AI? No. Apple's new top secret portable AI data center? Also no — this was my work machine at the office 20 years ago. Those were the days!

Photo: G4 Power Mac

"A capsule bed in the style of R2D2" — this was my first-ever prompt when I got access to DALL-E 2 in June 2022. Now, in November 2024, I thought I’d try it again, exactly as it was, and compare it across some of today’s image models: DALL-E 3, Flux 1.1, and Stable Diffusion.

It’s fascinating to see how different (and sometimes unexpected) the results are compared to those from two and a half years ago. Even with such a simple prompt, each model interprets it uniquely, showing just how far these tools have come. I will admit that sometimes, I do miss the early models and it's a shame we no longer have them.
