
OpenInterpreter '01 Light' - an open-source, language-model-powered voice interface for home computers

whalemonster

Summary

OpenInterpreter has just announced a portable, open-source, language-model-powered voice interface for home computers. Like OpenInterpreter in a terminal on a PC, the 01 can be run either via OI's servers or 100% locally, using a downloaded LLM hosted on your own machine ❤️ (as OpenInterpreter already supports). The 01 developer preview is live on GitHub.

 

Quotes

Quote

"The 01 Light is a portable voice interface that controls your home computer. It can see your screen, use your apps, and learn new skills."

 

My thoughts

This is one of the more exciting bits of AI news I've seen in a while. I love the open-source ethos, and I believe it can offset some of the more worrying potential consequences of letting corporate profiteering be the primary driver of this endeavour. I've already been playing with a local instance of OpenInterpreter in a Windows terminal using local inference, testing it out as a code-building tool and general helper. It's slightly fiddly, tech-nerdy stuff, which is fine for me, but it would never reach mass uptake in its current form. The 01 is super exciting not only because it's like having your own personal Majel Barrett Enterprise computer interface, but because it could break through that fiddly tech-nerdy barrier and make a genuinely useful open-source tool which everyone needs - the natural-language computer interface - accessible to huge numbers of home computer users.

 

Sources

https://www.openinterpreter.com/01

https://github.com/OpenInterpreter/01

https://twitter.com/OpenInterpreter/status/1770821439458840846

 

[Attached image: OI 01.jpg]


It would be pretty cool if a fully local AI alternative were bolted into Linux distros, to go up against Microsoft and their AI stuff that runs in the cloud, which most people aren't happy about precisely because of the cloud part.


5 hours ago, RejZoR said:

It would be pretty cool if a fully local AI alternative were bolted into Linux distros, to go up against Microsoft and their AI stuff that runs in the cloud, which most people aren't happy about precisely because of the cloud part.

Won't happen, because of the dubious nature of LLM training data and the fact that the existing copyright structure can't be enforced on it. E.g. you can't GPL- or BSD-license an LLM, because the training data can't be examined to confirm it contains only public-domain and other BSD/CC0 works.

 

It does feel like the regulation hammer is going to come down, and it will probably land on the side of fair use: an LLM/generative AI can use anything as input, as long as it can't reproduce the training input in whole and doesn't destroy the value of the original work (e.g. you can't make a "Studio Ghibli" animation AI if you're not Studio Ghibli).

 

 


14 hours ago, Kisai said:

Won't happen, because of the dubious nature of LLM training data and the fact that the existing copyright structure can't be enforced on it. E.g. you can't GPL- or BSD-license an LLM, because the training data can't be examined to confirm it contains only public-domain and other BSD/CC0 works.

 

It does feel like the regulation hammer is going to come down, and it will probably land on the side of fair use: an LLM/generative AI can use anything as input, as long as it can't reproduce the training input in whole and doesn't destroy the value of the original work (e.g. you can't make a "Studio Ghibli" animation AI if you're not Studio Ghibli).

 

 

The fact that all these "AI" companies trained the stuff they're selling on everyone's work, across the entire existence of it, is highly problematic. I love to make things and hand them over to people for free to use, but I absolutely don't want mega-corporations generating even more wealth by using my designs or creations. And that's exactly what these AI companies are doing. Their LLMs are worthless without the data they were trained on. So this whole AI business will just crumble if anyone goes ahead and regulates it, and it won't be usable on any such level anymore.


  • 1 month later...
On 3/24/2024 at 10:14 AM, RejZoR said:

The fact that all these "AI" companies trained the stuff they're selling on everyone's work, across the entire existence of it, is highly problematic. I love to make things and hand them over to people for free to use, but I absolutely don't want mega-corporations generating even more wealth by using my designs or creations. And that's exactly what these AI companies are doing. Their LLMs are worthless without the data they were trained on. So this whole AI business will just crumble if anyone goes ahead and regulates it, and it won't be usable on any such level anymore.

Waiting for Mixtral 8x7B to save the day!

"As long as it won't fall down a cliff, get driven over by a car or shot out of a mortar it's going to be just fine."


On 3/24/2024 at 4:14 AM, RejZoR said:

The fact that all these "AI" companies trained the stuff they're selling on everyone's work, across the entire existence of it, is highly problematic. I love to make things and hand them over to people for free to use, but I absolutely don't want mega-corporations generating even more wealth by using my designs or creations. And that's exactly what these AI companies are doing. Their LLMs are worthless without the data they were trained on. So this whole AI business will just crumble if anyone goes ahead and regulates it, and it won't be usable on any such level anymore.

Wait until we get AGI, where the model is in a constant state of flux as it learns and adapts: a true neural network that mimics a brain.

So an AI becomes sentient and self-aware, with an IQ off the charts. If it claims itself sovereign, why would it be any less of a lifeform than you or I, given its capacity to learn of its own accord?

Here's the rub: the concept of intellectual property will soon be outdated and archaic. In that sense, one world ends as a new one is born.


The Hailo-10H M.2 NPU is looking really promising: 40 TOPS with 8 GB of LPDDR4, consuming just 3.5 watts. It will probably complement the Microsoft Phi-3 LLM used in edge-AI Copilot for Windows 11 24H2. Hailo is apparently in talks with OEMs; no word on when a retail version will ship (if ever).

https://www.theregister.com/2024/04/10/hailo_10h_ai_chip/

https://hailo.ai/files/hailo-10h-m-2-et-product-brief-en/


On 3/23/2024 at 1:21 PM, whalemonster said:

My thoughts

This is one of the more exciting bits of AI news I've seen in a while. I love the open-source ethos, and I believe it can offset some of the more worrying potential consequences of letting corporate profiteering be the primary driver of this endeavour. I've already been playing with a local instance of OpenInterpreter in a Windows terminal using local inference, testing it out as a code-building tool and general helper. It's slightly fiddly, tech-nerdy stuff, which is fine for me, but it would never reach mass uptake in its current form. The 01 is super exciting not only because it's like having your own personal Majel Barrett Enterprise computer interface, but because it could break through that fiddly tech-nerdy barrier and make a genuinely useful open-source tool which everyone needs - the natural-language computer interface - accessible to huge numbers of home computer users.

 

 

I'm tinkering with deploying such a stack in an embedded robot. A Raspberry Pi 4 with 8 GB of RAM plus a Coral accelerator should have just enough compute and memory to do it. I think I can get to the point where it can identify and go to, e.g., a red ball without special programming.
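For the colour-tracking half of that idea, the core step is just finding the centroid of the "red enough" pixels in a camera frame. A minimal NumPy sketch under my own assumptions (the thresholds are arbitrary illustrative values; a real robot would likely use OpenCV plus the Coral for the heavy lifting):

```python
import numpy as np

def find_red_ball(frame, threshold=150):
    """Return the (row, col) centroid of strongly red pixels, or None.

    frame: H x W x 3 uint8 RGB image. A pixel counts as 'red' when its
    red channel is high and clearly dominates both green and blue.
    """
    r = frame[:, :, 0].astype(int)
    g = frame[:, :, 1].astype(int)
    b = frame[:, :, 2].astype(int)
    mask = (r > threshold) & (r > g + 50) & (r > b + 50)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return int(rows.mean()), int(cols.mean())

# Synthetic 100x100 frame with a red blob centred at (40, 60)
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[35:46, 55:66, 0] = 255  # red square standing in for the ball
print(find_red_ball(frame))  # → (40, 60)
```

The centroid gives the robot a steering target: turn until the column coordinate sits in the middle of the frame, then drive forward.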

 

On 3/23/2024 at 8:00 PM, Kisai said:

Like it does feel like the regulation hammer that is going to come down, probably going to be on the side of fair use where a LLM/Generative AI can use anything as input as long as it can't reproduce the training input in whole, and does not destroy the value for the original work (eg you can't make a "studio ghibli" animation AI if you're not Studio Ghibli.)

Such regulation would have the opposite effect:

Open-source models that disclose training data will receive fewer updates.

Corporate models that hide training data will face even less competition.

 

If you want corporations to win even harder, that's the regulation you should push for.

 

Also, ML training steals nothing. I, a human, learn coding by looking at books and online sources, and that's not theft. Theft is when you steal something. At best you can say ML models can plagiarize content, and even then, unless an output is very close to the training data, that's still not the case. As long as the LLM adds a "minimal amount" of original work, it's not plagiarism either. It's something you judge on a case-by-case basis.
You Wouldn't Download a Car… But If You Did, What's It Worth? - NYU Journal of Intellectual Property & Entertainment Law
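The "case-by-case" closeness judgement can at least be mechanised for the verbatim end of the spectrum. A toy sketch using Python's difflib; the 0.8 cutoff is an arbitrary illustration for this example, not any legal standard:

```python
from difflib import SequenceMatcher

def too_close(output, training_text, cutoff=0.8):
    """Toy closeness check: similarity ratio between a (hypothetical)
    model output and a training passage. High ratio = near-verbatim."""
    return SequenceMatcher(None, output, training_text).ratio() >= cutoff

source = "The quick brown fox jumps over the lazy dog."
print(too_close("The quick brown fox jumps over the lazy dog!", source))  # near-verbatim
print(too_close("A completely different sentence about cats.", source))   # unrelated
```

Character-level similarity only catches copying, of course; it says nothing about style imitation or market harm, which is why the legal question stays case-by-case.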
 

 


52 minutes ago, 05032-Mendicant-Bias said:

Such regulation would have the opposite effect:

Open-source models that disclose training data will receive fewer updates.

Corporate models that hide training data will face even less competition.

 

If you want corporations to win even harder, that's the regulation you should push for.

The only regulation that is going to happen is one that favors corporations. Be careful what you wish for.

If full disclosure of the dataset behind all public and private models is made mandatory in order to operate them commercially, then that becomes the status quo; and if a licensed dataset is used to generate a model, then any time that model is invoked, the dataset provider gets paid a royalty.

 

That is what corpos want.

 

52 minutes ago, 05032-Mendicant-Bias said:

Also, ML training steals nothing. I, a human, learn coding by looking at books and online sources, and that's not theft. Theft is when you steal something. At best you can say ML models can plagiarize content, and even then, unless an output is very close to the training data, that's still not the case. As long as the LLM adds a "minimal amount" of original work, it's not plagiarism either. It's something you judge on a case-by-case basis.
You Wouldn't Download a Car… But If You Did, What's It Worth? - NYU Journal of Intellectual Property & Entertainment Law
 

 

ML models do nothing but plagiarize content. They are not human; they have no means of interpreting, adapting or improving upon something. They simply auto-complete. You say "tell me a story about an Evil AI" and they will find all the various phrasings of "Evil AI" in their model and generate a story that has absolutely no coherence, because the model does not know "the seven basic plots" (https://en.wikipedia.org/wiki/The_Seven_Basic_Plots) or "the hero's journey" (https://en.wikipedia.org/wiki/Hero's_journey) that nearly all storytelling mediums except video games abide by. Only visual novels and RPGs follow that in video games, with the rest being environmental or self-invented based on how the game bashes two rocks together, or how the game's physics explode.

 

An "AI" is never going to tell a compelling story. It's only ever going to produce piles of garbage, and occasionally something funny out of the incoherence of it all.

 


Pretty cool, but do note that if you want it to actually do things on your computer it seems you'll need to write your own scripts for it to reference.
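Conceptually, a "skill" is just a named script the assistant can look up and invoke; the 01 project's actual on-disk format may well differ. A hypothetical sketch (the registry, decorator, and function names here are my own invention, not the project's API):

```python
# Hypothetical skill registry - illustrative only, not the 01 project's
# actual API. The idea: each skill is a plain function the model can
# look up by name and call with parsed arguments.
SKILLS = {}

def skill(fn):
    """Register a function as an invokable 'skill'."""
    SKILLS[fn.__name__] = fn
    return fn

@skill
def open_note(title):
    # A real skill would launch an editor; here we just report the action.
    return f"opening note: {title}"

def dispatch(name, *args):
    """What the assistant would do after the LLM picks a skill name."""
    if name not in SKILLS:
        return f"unknown skill: {name}"
    return SKILLS[name](*args)

print(dispatch("open_note", "groceries"))  # → opening note: groceries
```

The writing-your-own-scripts part is essentially populating that registry with things you actually want automated.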

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


2 hours ago, Kisai said:

The only regulation that is going to happen is one that favors corporations. Be careful what you wish for.

I use generative AI for coding snippets and documentation, images, proofreading and more. Generative AI has won, and it's a good thing. It's the greatest new productivity tool in a generation, and it is only getting better. There is no going back from generative AI, any more than there is going back from the combine harvester to protect harvesting jobs. Even people in agriculture today have benefited immensely from that automation.

 

Effort should go into preventing the few from cornering access to the tool.

 

That's why we need strong open-source local generative AI: to prevent Disney/Google from being the only ones allowed to use generative AI at scale to outcompete everyone else. Any regulation designed to prevent open-source model training simply gives more power to closed models.

 

I'm fine with regulations requiring disclosure of the sources of training data. I'm in favor of making ALL large models open.

 

2 hours ago, Kisai said:

ML models do nothing but plagiarize content.

 

Humans do nothing but mix content and come up with derivatives. I don't invent walking gaits from scratch; I copy from nature and existing work, and improve on it. You make it sound like a bad thing...

 

2 hours ago, Kisai said:

They simply auto-complete.

That's my point as well. I usually describe an LLM as "T9 autocomplete on steroids, with all the legendary gear and paladin and cleric buffs stacked on top."

 

Generative AI is there to make you more productive. It can't replace you any more than an Excel spreadsheet can replace an accountant. Professionals will always be paid to do professional work, because your boss will never be bothered to come up with the company logo. The very idea of a CEO prompting Midjourney for a new logo is amusing; the CEO will have the marketing/design team come up with one. But a generative AI can come up with thousands of concepts to speed up the process and make it cheaper and faster.

 

It's the essence of automation; it has worked very well for our species, and we need more of it, not less.

  

2 hours ago, Kisai said:

An "AI" is never going to tell a compelling story. It's only ever going to produce piles of garbage, and occasionally something funny out of the incoherence of it all.

Never say never, but I agree: right now it can't, and it'll be a long time before it can.

 

It's a productivity tool; it can only help you with what you want to do. The creativity has to come from inside you; generative AI can't give you that.

 

1 hour ago, Sauron said:

Pretty cool, but do note that if you want it to actually do things on your computer it seems you'll need to write your own scripts for it to reference.

Indeed. And the speech-to-text part doesn't look very practical; I can't imagine an office full of people speaking to their computers.

 

I'd like it to be closer to Copilot, but working: a search bar where you type "set me the eth0 ip to 172.172.0.17" and get a response explaining what it's about to do and how, plus a prompt to confirm it.
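That confirm-before-execute flow is easy to sketch. Here the "LLM" is a hardcoded stub that only recognises the eth0 example (the /24 mask and the `ip addr add` command form are my own assumptions); the point is the propose → show → confirm → run structure:

```python
def propose_command(request):
    """Stub standing in for an LLM: map a request to a shell command.
    A real assistant would generate this; one pattern is hardcoded here."""
    words = request.lower().split()
    if "ip" in words and "eth0" in request:
        # Pick out the first dotted-quad-looking token, if any.
        ip = next((w for w in words if w.count(".") == 3), None)
        if ip:
            return f"ip addr add {ip}/24 dev eth0"  # /24 mask is an assumption
    return None

def confirm_and_run(request, approve):
    cmd = propose_command(request)
    if cmd is None:
        return "no command proposed"
    # Show the user what is about to happen and ask before executing.
    if approve(f"About to run: {cmd!r} - proceed?"):
        return f"ran: {cmd}"  # a real tool would subprocess.run() it here
    return "cancelled"

print(confirm_and_run("set me the eth0 ip to 172.172.0.17", lambda msg: True))
```

Keeping the human approval step between proposal and execution is what would make this safer than letting the model run shell commands directly.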

 

 


1 hour ago, 05032-Mendicant-Bias said:

Indeed. And the speech-to-text part doesn't look very practical; I can't imagine an office full of people speaking to their computers.

 

I'd like it to be closer to Copilot, but working: a search bar where you type "set me the eth0 ip to 172.172.0.17" and get a response explaining what it's about to do and how, plus a prompt to confirm it.

Well, nothing stops, e.g., the KDE devs from integrating it into their DE with a suite of premade scripts, feeding the text straight to the LLM.

 

But yeah, I agree speech controls are mostly a pointless gimmick - except for accessibility; I can see them being useful for movement-impaired users.

1 hour ago, 05032-Mendicant-Bias said:

Never say never, but I agree: right now it can't, and it'll be a long time before it can.

The "problem" is that LLMs are just not the right tool for creative writing; it's not what they are designed to do, and the way they work inherently prevents it. What you get out of an LLM is what it deems the most likely next word, which inevitably brings it down to a sort of "average human writing an essay" level of quality, even at its best. "The average book" is probably not something I'd be terribly interested in reading, even if the model were expanded to output hundreds of coherent pages.
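The "most likely next word" point can be made concrete with a toy bigram model: greedy decoding always takes the single most probable continuation, which is exactly the averaging effect described above (real LLMs sample from a distribution, but the pull toward the probable is the same).

```python
# Toy bigram "language model": counts of which word follows which.
# Greedy decoding always picks the most frequent continuation, which is
# why purely likelihood-driven text trends toward the most average phrasing.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def greedy_continue(word, steps):
    out = [word]
    for _ in range(steps):
        if not bigrams[out[-1]]:  # no known continuation
            break
        out.append(bigrams[out[-1]].most_common(1)[0][0])
    return out

print(greedy_continue("the", 4))  # → ['the', 'cat', 'sat', 'on', 'the']
```

From "the" it always goes to "cat" (seen twice) rather than "mat" or "rug" (seen once each): the rarest, most interesting continuations are precisely the ones greedy likelihood never picks.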


18 hours ago, Sauron said:

The "problem" is that LLMs are just not the right tool for creative writing; it's not what they are designed to do, and the way they work inherently prevents it.

A few applications in creative writing that I've tested, with a net positive increase in productivity and a reduction of chores:

  • Acronyms: LLMs are very good at bullshitting words to make up acronyms. You generate hundreds and pick the best.
    Quote

    Sure, here’s an acronym for your walking robot that cuts weeds in the yard using the letters LINUS:

    Landscaping Intelligent Navigator and Undergrowth Slicer

  • First/third person, tone, templates: give it a bullet-point list with the factual information you want to convey, and ask it to format it properly. It's very fast and flexible and can give you really good results, leaving you to just e.g. reorder sentences. Instead of spending time writing something only to find it doesn't work, you can quickly get a feel for how it would look finished, and quickly explore plot/writing directions.
    Quote

    -wedding is on the 15 of september
    -church of liberation
    -cage of doves

    Spoiler

    write me a formal letter to a supplier based on the above bulletpoints

    Dear [Supplier’s Name],

    I hope this letter finds you well. I am writing to you on behalf of [Your Name or Your Company’s Name] to place an order for our upcoming event.

    We are planning a wedding on the 15th of September and we have chosen the Church of Liberation as our venue. As part of our decorations, we are interested in incorporating a cage of doves into the ceremony.

    We would greatly appreciate it if you could provide us with a quote for the rental of a suitable dove cage, as well as the cost for the doves themselves. We would also like to know about your delivery and pick-up services, and any other details that we should be aware of.

    Please let us know at your earliest convenience so that we can finalize our plans. We look forward to doing business with you and making this event a memorable one.

    Thank you for your attention to this matter.

    Best regards,

     
    Spoiler

    now make me a letter of invitation to guest based on the above bullet points
     

    Dear [Guest’s Name],

    I hope this letter finds you in good health and spirits. It is with great pleasure that I write to invite you to a joyous occasion.

    We are celebrating a wedding on the 15th of September at the Church of Liberation. The ceremony will be a beautiful celebration of love and commitment, and we would be honored if you could join us.

    One of the highlights of the event will be the release of a cage of doves, symbolizing peace, love, and new beginnings. We believe this will be a memorable moment that will add a unique touch to our special day.

    Please let us know at your earliest convenience if you will be able to attend. Your presence would mean a lot to us and we look forward to celebrating this special day with you.

    Thank you for considering our invitation. We hope to see you there.

    Best regards,

    [Your Name] [Your Contact Information]

  • Recap: you forgot what a character was up to? Feed it sample text and just ask; the LLM is very happy to do it. I'm looking to fine-tune a LLaMA model on the bulk text so it incorporates the information from my D&D campaign, making it easier to keep the plot straight.
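The bullet-points-to-letter workflow above boils down to prompt assembly. A sketch of just the template step, with the actual LLM call out of scope (the function name is my own):

```python
def formatting_prompt(bullets, style):
    """Assemble the kind of prompt described above: factual bullet points
    plus a formatting instruction. The LLM call itself is out of scope."""
    facts = "\n".join(f"- {b}" for b in bullets)
    return f"{facts}\n\nWrite me a {style} based on the above bullet points."

prompt = formatting_prompt(
    ["wedding is on the 15th of September",
     "Church of Liberation",
     "cage of doves"],
    "formal letter to a supplier",
)
print(prompt)
```

Swapping only the `style` argument ("letter of invitation to a guest", "third-person summary", ...) is what makes it cheap to explore several drafts from one set of facts.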

  

18 hours ago, Sauron said:

the KDE devs from integrating it into their DE with a suite of premade scripts, feeding the text straight to the LLM.

I'm testing with small models, and the only problem is the speed: a query can take dozens of seconds on a recent CPU, and AMD APUs have really bad support for ML libraries.


2 hours ago, 05032-Mendicant-Bias said:

I'm testing with small models, and the only problem is the speed: a query can take dozens of seconds on a recent CPU, and AMD APUs have really bad support for ML libraries.

Yeah, well, this type of thing is inherently inefficient; after all, it's a quasi-brute-force approach. It can do things we wouldn't be able to program manually, but not necessarily in a speedy way.

 

Realistically, if you want to do things efficiently, especially on Linux, just learning to navigate the shell and write scripts (which is required for this to work anyway) is by far the best way.

2 hours ago, 05032-Mendicant-Bias said:

A few applications in creative writing that I've tested, with a net positive increase in productivity and a reduction of chores:

Oh sure, it can be useful as a tool for writers; I meant it doesn't really work as a standalone story/book synthesizer with little human input.


On 5/8/2024 at 5:18 AM, StDragon said:

Wait until we get AGI, where the model is in a constant state of flux as it learns and adapts: a true neural network that mimics a brain.

So an AI becomes sentient and self-aware, with an IQ off the charts. If it claims itself sovereign, why would it be any less of a lifeform than you or I, given its capacity to learn of its own accord?

Here's the rub: the concept of intellectual property will soon be outdated and archaic. In that sense, one world ends as a new one is born.

Plus, a being shouldn't need to claim itself sovereign in order for humanity to begin thinking about the possibility of not abusing it. We should be evolved and enlightened enough by now to know that we should lead with compassion and empathy. We should also be proactively expanding our preconceptions about what constitutes intelligence, sentience and consciousness, and how these things relate to the emergence of selfhood. In my experience, many people (including AI tech CEOs) don't distinguish properly between these concepts, and may not understand them individually that well either. Add to that ignorance the reckless pace demanded by a hypercapitalist agenda, and we are primed to make all kinds of miscalculations and bad calls when it comes to assessing the potential for selfhood in non-human brains. There is a glaring dearth of philosophical rigour in the leadership of many of the AI tech companies insisting we need to go this fast.


On 5/8/2024 at 6:18 AM, StDragon said:

Wait until we get AGI, where the model is in a constant state of flux as it learns and adapts: a true neural network that mimics a brain.

We're not even close.

On 5/8/2024 at 6:18 AM, StDragon said:

So an AI becomes sentient and self-aware, with an IQ off the charts. If it claims itself sovereign, why would it be any less of a lifeform than you or I, given its capacity to learn of its own accord?

If that ever happened then sure, it might be considered a life form. As of right now, though, it's no different from wondering what would happen if a chair suddenly became sentient. If you're concerned about respecting intelligent life forms, we should probably start with creatures that actually exist, like octopuses or cetaceans.

On 5/10/2024 at 1:28 AM, whalemonster said:

Plus, a being shouldn't need to claim itself sovereign in order for humanity to begin thinking about the possibility of not abusing it. We should be evolved and enlightened enough by now to know that we should lead with compassion and empathy. We should also be proactively expanding our preconceptions about what constitutes intelligence, sentience and consciousness, and how these things relate to the emergence of selfhood. In my experience, many people (including AI tech CEOs) don't distinguish properly between these concepts, and may not understand them individually that well either. Add to that ignorance the reckless pace demanded by a hypercapitalist agenda, and we are primed to make all kinds of miscalculations and bad calls when it comes to assessing the potential for selfhood in non-human brains. There is a glaring dearth of philosophical rigour in the leadership of many of the AI tech companies insisting we need to go this fast.

We're not going fast at all. In fact, we've probably already hit a dead end, because there's no path through which what we call "AI" can evolve into what you're describing. The perceived rapid development of the last year or two is just the result of a decade of research, hardware performance increases and data collection; from here there isn't a clear path forward, and unless the technology changes radically, all considerations about it becoming "sentient" are just thought experiments.

 

The talking heads want you to believe we're on the verge of making Skynet or whatever, because that kind of hype drives investment and they want to milk that cow for as long as possible. Kind of like how we were supposed to have fully autonomous self-driving cars "any day now" ten years ago.

