I remember the first time I tried an LLM. It was back in early 2023, when ChatGPT was super hyped. I had to buy a UK phone number and a VPN simply because ChatGPT was not available in my country. I also set up some accounts for my boss and coworkers.
And oh boy, it was magical. The idea of a chatbot was not new at all; we had chatbots way before that, and I think those bots were just regex matching combined with if-else hell. They were very limited and showed no intelligence at all: their answers were all over the place, sometimes worse than an elementary student's. Then we had Alexa/Google Assistant/Siri. Those were more advanced, but most of the time I felt like they were just search engine wrappers with some fun functions like setting timers, playing songs, or reporting the weather. Still, there was nothing magical about them after playing with them for a while.
But ChatGPT was different. At some moments I felt like I was talking to a real human. It could do nothing and everything: you could not ask it to set a timer like Alexa, or ask what the weather is like today like Google Assistant, but you could ask it to write Python code that generates a protobuf file from a C header file, or ask it how Japan's economy evolved from the 80s to the 2020s. The first “public” version of ChatGPT was pretty bad: the code it wrote barely worked (or did not work at all), and maintaining that code was a big no. It hallucinated a lot, so don't even think about using it in a professional codebase. Then I tried Copilot (before Microsoft had the free tier). It was autocomplete on steroids: it could detect the patterns in your code and finish what you were about to write. Good if you were writing repetitive patterns like logging, bad for anything else.
In two years, AI/LLMs have advanced, like, a lot. Their code is now better; I could say it's on par with junior developers, and in some ways better than senior devs, thanks to being trained on countless “legal” repos. But it's still hit or miss, especially if you are working with a less popular language/framework (Verilog/VHDL folks in shambles). Apart from coding, they can now even do Internet searches themselves, so there should be no outdated data in their answers (let's hope they are smart enough to know when to run a query instead of hallucinating).
Some people dislike the rise of LLMs/AI, and I understand that. Some have good reasons, like “the answers are too unreliable” or “it will dumb you down”, etc. They are right.
But this has always been the case when a new tool is invented: the invention of paper and ink meant humans no longer had to memorize whole books. The invention of power tools meant humans became less muscular. We gain something, we lose something. It's on us to decide if it's worth it. And I decided that I can accept the tradeoff of AI/LLMs: we lose the “authority” over the stuff an LLM generates, but we get it fast.
AI/LLMs are not going anywhere, so it's better to adapt to them than to resist them. We need to learn their shortcomings so we can work around them. But “everything is toxic, it just depends on the dose”: depending too heavily on LLMs/AI is bad, and I won't try to argue otherwise. If you are one of those “vibe coders”, I suggest you stop, at least at the level of LLM we have now. Nobody can say how good LLMs will be in the future. (But “vibe coders” may be good for me: someone has to clean up the mess, after all, and I could make a killing out of it.)
So, I do think LLMs are a good thing for us. Just don't overdo it.
PS: I think it's time for something more personal; maybe the next post will be about something I am working on.