
That’s unfortunate and certainly not what I spend my time dreaming about. My favorite use case for the elderly is as a sort of companion for sharing their story for future generations. One of our partners uses our technology to help the elderly. But yeah, this kind of technology makes AI feel more natural, so we should be aware of that and make sure it’s used for good.

The response-timing chart in the blog post shows that, even with perfect precision/recall, Sparrow-1 also has the fastest true-positive response times.

The turn-taking models were evaluated in a controlled environment with no additional cascaded steps (LLM, TTS, Phoenix). This matters for an apples-to-apples comparison: the rest of the pipeline’s variability doesn’t influence the measurements.

The video conversation examples are Sparrow-1 within the full pipeline. These responses aren’t as fast as Sparrow-1 itself because the LLM, TTS, facial rendering, and network transport also take time; without Sparrow-1 they would be slower still. Sparrow-1 is what enables the responses to be as fast as they are, and with a faster CVI pipeline configuration responses can be as fast as 430 ms in my testing.
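To make that concrete, here’s a rough sketch of how an end-to-end response budget decomposes across the cascaded stages. The millisecond numbers are illustrative placeholders, not our measured figures:

    # Rough, illustrative latency budget for a cascaded conversational pipeline.
    # Stage names follow the comment above; the millisecond values are
    # made-up placeholders, not measured numbers.

    STAGE_BUDGET_MS = {
        "turn_detection (Sparrow-1)": 100,   # decide the user has finished speaking
        "llm_first_token":            150,   # time to first token from the LLM
        "tts_first_audio":            100,   # time to first synthesized audio
        "facial_rendering":            50,   # first rendered video frame
        "network_transport":           30,   # delivery overhead
    }

    def total_response_latency(budget: dict[str, int]) -> int:
        """Sum per-stage latencies to get the user-perceived response time."""
        return sum(budget.values())

    if __name__ == "__main__":
        for stage, ms in STAGE_BUDGET_MS.items():
            print(f"{stage:<30} {ms:>5} ms")
        print(f"{'total':<30} {total_response_latency(STAGE_BUDGET_MS):>5} ms")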


You can try Sparrow-1 with any of our PALs, or by signing up for a developer account.

Try out the PALs: they all use Sparrow-1. You can try Charlie on the Tavus.io homepage, in one of the retro-styled windows there.

This is a very good idea. We currently have a model in our perception system (Raven-1) that does this partially: it uses audio to understand tone and augment the transcription we send to the conversational LLM. That seems to have a positive impact on the conversational style of the replica’s output. We’re still evaluating that model and will post updates when we have better insights.
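As a rough illustration of what augmenting the transcription can look like (the function names and labels below are hypothetical, not Raven-1’s actual interface), the idea is to attach a tone annotation to each user turn before it reaches the conversational LLM:

    # Hypothetical sketch: annotate a transcribed user turn with a tone label
    # inferred from audio, then pass the augmented text to the conversational LLM.
    # classify_tone() and generate_reply() stand in for real perception / LLM calls.

    def classify_tone(audio_chunk: bytes) -> str:
        """Placeholder for an audio tone classifier (e.g. 'excited', 'hesitant')."""
        return "hesitant"

    def augment_transcript(transcript: str, audio_chunk: bytes) -> str:
        tone = classify_tone(audio_chunk)
        # The LLM sees both the words and how they were said.
        return f"[user tone: {tone}] {transcript}"

    def generate_reply(augmented_transcript: str) -> str:
        """Placeholder for the conversational LLM call."""
        return f"(reply conditioned on: {augmented_transcript!r})"

    if __name__ == "__main__":
        turn = augment_transcript("I guess that could work...", b"\x00" * 1600)
        print(generate_reply(turn))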

You should be skeptical, and try it out. I selected 28 long conversations for our evaluation set, all unseen audio. Every turn-taking model makes tradeoffs, and I tried to make the best tradeoffs for each model by adjusting and tuning the implementations. As the creator of Sparrow, I’m certainly not in a position to be totally objective. However, we did use unaltered real conversational audio to evaluate, and I tried to find examples that would challenge Sparrow-1, with lots of variation in speaker style across the conversations.

That’s great! I also built Sparrow-0, and Sparrow-1 was designed to address Sparrow-0’s shortcomings. Sparrow-1 is a much better model, both in terms of responsiveness and patience.

I haven’t tried that one yet, I’ll check it out.

Maybe InfiniBand is a bit more than we can handle. That technology is incredible! You are right, though: we have been willing to build things we needed that didn’t exist yet, or weren’t fast enough or natural enough. Sparrow-1, Raven-1, and Phoenix-4 are all examples of that, and we have more on the way.

As a dev myself, I see a few modes of operation:
- push to talk
- long-form conversation
- short-form conversation

In both conversational approaches, the AI can respond with simple acknowledgements. When prompted by the user, the AI could go into longer discussions and explanations.

It might be nice for the AI to quickly confirm it hears me and to give me subtle cues that it’s listening: backchannels like “yeah”, and non-verbal ones like “mhmm”. So I can imagine having a developer assistant that feels more like working with another dev than working with a computer.
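As a sketch of what that could look like on the client side (the event fields, thresholds, and strings below are made up, not any real Tavus API), a simple policy for choosing between a backchannel and a full reply might be:

    # Made-up sketch of a client-side policy for when an assistant should emit a
    # short backchannel ("yeah", "mhmm") versus a full reply. The thresholds and
    # field names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class ListeningEvent:
        user_pause_ms: int         # how long the user has paused
        turn_complete: bool        # did the turn-taking model say the turn ended?
        user_asked_question: bool  # simple flag derived from the transcript

    def choose_response(event: ListeningEvent) -> str:
        if not event.turn_complete:
            # User is still mid-thought: a subtle cue that we're listening.
            return "backchannel: mhmm" if event.user_pause_ms > 600 else "stay silent"
        if event.user_asked_question:
            return "full reply"
        # Turn ended but no question: a brief acknowledgement is enough.
        return "acknowledgement: yeah, got it"

    if __name__ == "__main__":
        print(choose_response(ListeningEvent(800, False, False)))  # -> backchannel
        print(choose_response(ListeningEvent(300, True, True)))    # -> full reply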

That being said, there is room for all modes, all at the same time, with shifts between them at different times. A lot of the time I just don’t want to talk at all.

