Hey there, readers! While brainstorming a title for my new series, I stumbled upon an article that left me feeling completely ignorant. But in a good way. It’s a fascinating read, and I can’t wait to share it with you.
Let’s dive into the intriguing world of MuL, or Machine Unlearning: the deliberate removal of knowledge or skills from a machine-learning model. Google DeepMind is currently navigating the ethical challenges in this field, recognizing that ignoring them could lead to a future reminiscent of a Terminator movie.
At least, that’s my conclusion after reading the facts the article lays out about privacy, copyright, and safety, the last one being the keyword that sent my imagination running wild (or perhaps not that wild).
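For the fellow curious, here is roughly what “making a model forget” can look like in practice. This is my own toy sketch, not DeepMind’s method or anything from the article: it uses one simple heuristic from the unlearning literature, gradient ascent on the examples to be forgotten, balanced by ordinary training on the data we keep. Every name and number here is purely illustrative.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy data: 200 points, 2 features, 2 classes.
X = torch.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).long()

# Pretend the first 20 examples must be "forgotten" (e.g. a privacy request).
forget_ds = TensorDataset(X[:20], y[:20])
retain_ds = TensorDataset(X[20:], y[20:])

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# 1) Ordinary training on ALL the data, so the model "knows" the forget set.
for _ in range(50):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# 2) Approximate unlearning: ASCEND the loss on the forget set
#    while descending on the retain set, so overall utility is kept.
forget_loader = DataLoader(forget_ds, batch_size=20)
retain_loader = DataLoader(retain_ds, batch_size=32, shuffle=True)

unlearn_opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(10):
    for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
        unlearn_opt.zero_grad()
        # The minus sign flips gradient descent into ascent on the forget batch.
        loss = -loss_fn(model(xf), yf) + loss_fn(model(xr), yr)
        loss.backward()
        unlearn_opt.step()
```

The exact alternative, retraining the whole model from scratch without the forgotten data, guarantees the knowledge is really gone but is usually far too expensive, which is why approximate tricks like the one above are an active research area.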
This year, I’ve been busy exploring various AI tools, each with unique capabilities and applications: Buildpad, a platform for building MVPs and training machine-learning models; Blaze, a tool for generating creative content; Sintra, for optimizing AI workloads; and Claude Haiku, a language model. While diverse in their end uses, these tools all share a common need for human feedback and explanation to function effectively.
These AI tools, fueled by the collective knowledge of their creators and the World Wide Web, are a testament to human ingenuity. However, their true potential is unlocked only when they are enriched with the unique insights of the human mind. Seen this way, MuL underscores human feedback’s crucial role in AI’s ongoing evolution, while keeping Schwarzenegger’s nephews and other heaps of heavy metal from coming down on us.
I understand this is old news for most of you, yet MuL wasn’t, right?
Hence, as someone ignorant in the matter, I feel most useful through my inquiring, never-satisfied curiosity; in conclusion, I’d rather be curious and ignorant than a dumb elitist (read: square mind) from Uni. But let that be a story for another day.
May harmony find you,
Irena Phaedra
