I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.
Also, search just seems like overkill. If I type in “population of london”, I just want to be taken to a reputable site like Wikipedia. I don’t want a guessing machine to tell me.
Other use cases, maybe. But there are so many poor uses of AI that it’s hard to take any of it seriously.
If I understand how AI works (predictive models), it kinda seems perfectly suited for translating text. That’s also exactly how I’ve been using it with Gemini: translating all the memes in ich_iel 🤣. Unironically, it works really well, and the only ones that aren’t understandable are cultural, not linguistic.
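(For the curious: doing the same thing through the API instead of the app is only a few lines. This is just a sketch using the google-generativeai Python SDK; the model name and the example caption are placeholders I made up, not anything from this thread.)

```python
# Rough sketch: translate a German meme caption with Gemini via the official
# Python SDK. The model name and example caption are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

caption = "Wenn du den Witz erst beim dritten Lesen verstehst"  # made-up example
prompt = (
    "Translate this German meme caption into English. Keep the tone, and "
    "briefly note any cultural reference that doesn't translate:\n\n" + caption
)
print(model.generate_content(prompt).text)
```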
Oh, that’s the best part: since it’s memes, you honestly never know if it’s even meant to be completely sensible. So even if it does hallucinate, it just adds a bit of spice 🤌🤌
I also like the thought that billions were probably spent to make something that is best suited for deep-frying memes.
I feel like letting your skills in reading and communicating in writing atrophy is a poor choice. And skills do atrophy without use. I used to be able to read a book and write an essay critically analyzing it. If I tried to do that now, it would be a rough start.
I don’t think people are going to just up and forget how to write, but I do think they’ll get even worse at it if they don’t do it.
Our plant manager likes to use it to summarize meetings (Copilot).
It in fact does not summarize to a bullet-point list in any useful way.
It breaks the notes into a header for each topic, then bullet points.
The header is a brief summary. The bullet points? The exact same summary, just split by sentence into individual points.
Truly stunning work. Even better with a “Please review the meeting transcript yourself as AI might not be 100% accurate” disclaimer.
Truly worthless.
That being said, I’ve got a few vision systems using an “AI” to recognize product that doesn’t match the pre-taught pattern. It’s very good at this.
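The pattern-matching version of that takes surprisingly little code these days. Purely as an illustration of the idea (not their actual system, and the threshold and file names are made up): embed known-good product images with a pretrained CNN and flag anything that lands too far from them.

```python
# Sketch: flag product that doesn't match the pre-taught pattern by comparing
# CNN embeddings against known-good reference images. Threshold is made up.
import torch
import torchvision.models as models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()   # use the 512-d feature vector instead of class logits
backbone.eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# embeddings of a handful of known-good reference shots (file names are placeholders)
good = torch.stack([embed(p) for p in ["good1.jpg", "good2.jpg", "good3.jpg"]])

def is_defect(path: str, threshold: float = 25.0) -> bool:
    # distance to the closest known-good example; above the (tunable) threshold
    # means the part doesn't match the taught pattern
    dist = torch.cdist(embed(path).unsqueeze(0), good).min()
    return dist.item() > threshold
```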
I think your manager has a skill issue if his output is coming out badly formatted like that. I’d tell him to include a formatting guideline in his prompt. It won’t solve his issues, but I’d gain some favor. Just gotta make it clear I’m no damn prompt engineer. lol
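For what it’s worth, by “formatting guideline” I just mean explicit layout instructions at the top of the prompt, something like the example below. The wording is mine, and no model is guaranteed to follow it perfectly.

```python
# Example only: a summarization prompt with an explicit formatting guideline.
# Nothing here is Copilot-specific, and no model is guaranteed to honor it.
PROMPT = """Summarize the meeting transcript below.
Output exactly three sections: Decisions, Action Items (owner + due date), Open Questions.
At most five bullets per section, one short sentence per bullet.
Do not repeat a section heading's text as a bullet underneath it.

Transcript:
{transcript}
"""
```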
I didn’t think we should be using it at all, from a security standpoint. Let’s run potentially business-critical information through the plagiarism machine that Microsoft has unrestricted access to. So I’m not going to attempt to help make its use better at all.
Hopefully, if it’s trash enough, it’ll blow over once nobody reasonable uses it.
Besides, the man’s derided by production operators and the non-Kool-Aid-drinking salaried folk.
He can keep it up. Lol
Okay, then self-host an open model. That solves all of the problems you highlighted.
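To be concrete about what “self-host an open model” can look like, here’s a minimal sketch assuming something like Ollama running on a box you control; the model name is just whatever you’ve pulled, not a recommendation.

```python
# Minimal sketch: summarize notes with a locally hosted open model via Ollama's
# HTTP API. Assumes the Ollama server is running and a model has been pulled
# (e.g. `ollama pull llama3`). Nothing leaves the local machine.
import requests

with open("meeting_notes.txt") as f:
    notes = f.read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarize these meeting notes as a short action-item list:\n\n" + notes,
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```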
Right, I just don’t want him to think that, or he’d have me tailor the prompts for him and give him an opportunity to micromanage me.
But if the text you’re working on is small, you could just do it yourself. You don’t need an expensive guessing machine.
Like, if I built a Rube Goldberg machine using twenty rubber ducks, a diesel engine, and a blender to tie my shoes, and it got it right most of the time, that’s impressive. But it’s also kind of a stupid waste, because I could’ve just tied them with my hands.
deleted by creator
I guess this really depends on the solution you’re working with.
I’ve built a voting system that relays the same query to multiple online and offline LLMs and uses a consensus to complete a task. I chunk a task into smaller, more manageable components and pass those through the system, so one abstract, complex query becomes a series of simpler asks with a higher chance of success.
Is this system perfect? No, but I’m not relying on a single LLM to complete it. Deficiencies in one LLM are usually made up for by at least one other, so the system works pretty well. I’ve also reduced the possible kinds of queries down to a much more limited subset, so testing and evaluating the results is easier (or even possible at all).
This system needs to evaluate the topic and sensitivity of millions of websites, which isn’t something I can do manually in any reasonable amount of time. A human will review the websites we flag under very specific conditions, but this cuts down on a lot of manual review work.
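The voting part boils down to something like this toy sketch. The backends here are hypothetical stubs standing in for the real online and offline models, the labels are simplified to two values, and the real system obviously does more than this.

```python
# Toy sketch of consensus voting across multiple LLM backends.
# The backend functions are hypothetical stubs, not a real system.
from collections import Counter
from typing import Callable

def backend_a(q: str) -> str:
    return "benign"       # stand-in for e.g. a hosted-API model

def backend_b(q: str) -> str:
    return "benign"       # stand-in for another hosted model

def backend_c(q: str) -> str:
    return "sensitive"    # stand-in for an offline/local model

def classify_site(url: str, backends: list[Callable[[str], str]]) -> str:
    # one narrow, constrained ask instead of an abstract, complex query
    query = (
        f"Classify the sensitivity of {url}. "
        "Answer with exactly one word: sensitive or benign."
    )
    votes = Counter(b(query) for b in backends)
    label, count = votes.most_common(1)[0]
    # no majority -> route to the human review queue instead of trusting one model
    return label if count > len(backends) // 2 else "needs_human_review"

print(classify_site("https://example.com", [backend_a, backend_b, backend_c]))  # -> benign
```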
When I said search, I meant offline document search, like “find all software patents related to fly-by-wire aircraft embedded control systems” from a folder of patents. Something like Elasticsearch would usually work well here too, but with an LLM I can dive further and get it to reason about the results surfaced by the first query. I absolutely agree that AI-powered web search is a shitshow.
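The retrieval half of that is basically embedding search over the folder. Here’s a minimal sketch with sentence-transformers, assuming the patents are plain text files; the model name is just a common default picked for the example, and the “reason about the results” step would then hand the top hits to an LLM.

```python
# Sketch: semantic search over a local folder of patent text files.
# The model name is a common default, not necessarily what gets used in practice.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

paths = sorted(Path("patents").glob("*.txt"))          # placeholder folder layout
corpus = [p.read_text(errors="ignore") for p in paths]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query = "software patents related to fly-by-wire aircraft embedded control systems"
hits = util.semantic_search(model.encode(query, convert_to_tensor=True), corpus_emb, top_k=10)[0]

# top hits could now be passed to an LLM for the follow-up reasoning step
for hit in hits:
    print(f'{paths[hit["corpus_id"]].name}  score={hit["score"]:.3f}')
```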