AmbitiousProcess (they/them)

  • 0 Posts
  • 6 Comments
Joined 9 months ago
Cake day: June 6th, 2025


  • Turns out there’s not actually much functionality in these at all. An RFID reader and an RGB LED, whoop-de-shit.

    Where did you get that idea? They have an RFID reader and an LED, yes, but they also have a speaker, microphone, accelerometer, light and color sensor, and near-field magnetic position detection, and then have to fit the battery alongside all of that, all in a 2x4 brick.

    Here’s an example of what cutting-edge brick tech could look like.

    That brick displays a fixed image that can't be changed without entirely reflashing it, requires a 4x8 powered baseplate to operate, and, compared to the smart brick, has no RFID, no LEDs, no sound, color, or light sensing, no accelerometer, no ability to detect other bricks near it, and no internal battery.

    The smart brick can play different sounds (fully interchangeable without reflashing firmware) based on nearby minifigures and interactive buttons and levers, can trigger lights and sounds based on rotation and movement, can change how it interacts based on nearby smart bricks, and can also be charged wirelessly and operate standalone. And of course, it'll be able to respond to sounds later on too.

    The brick from Hackaday has a display. That's it. It's cool, yes, but it's nowhere close to the smart brick.




  • True, but that also depends on the circumstance.

    Again, a lot of people now use LLMs as their primary search engine. Google is an afterthought; ChatGPT is their source of choice. If they ask a simple question with legal or medical implications, one covered by tons of sources, and the LLM answers it with the same accuracy as those other publications, should the company be sued?

    I think it would be a lot better to allow people to sue if it provides false advice that ends up causing some material harm, because at the end of the day, a lot of stuff can be considered “medical.”

    Maybe a trans person asks what gender affirming care is. Is that medical? I’d say it is. Should that not get discussed through an LLM if a person wants to ask it?

    I’m not saying I wholeheartedly oppose the idea of banning them from giving this type of advice, but I do think there are real questions about how many people this would actually benefit versus just cutting people off from information they might not bother to look up elsewhere, or worse, pushing them to less reputable, more fringe sites with fewer safeguards and less accountability.


  • I’m not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

    The reason so many people turn to LLMs for legal and medical advice is that both fields are incredibly expensive, complex, and hard to parse.

    If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it’s probably the flu and that I should mask up for a bit, that’s probably gonna be better than that person being told “I’m sorry, I can’t answer that.”

    At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

    I feel like I’d much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.

    Like, for example, if an LLM cites multiple medical journals, government health websites, etc., and provides the same information those sources had up, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else’s accidental misinformation?

    But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?

    I’m not really sure myself, to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information will, for a lot of people, just mean “you can’t know,” and they’re not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.