• 0 Posts
  • 107 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • I’m sorry; AI was trained on the sum total of human knowledge… if the perfect human being is by nature some variant of a psychopath, then perhaps the bias exists in the training data, and not the machine?

    How can we create a perfect, moral human being out of the soup we currently have? I personally think it’s a miracle that sociopathy is the mildest of the neurological disorders our thinking machines have developed.




  • Sorry. Most of that shit has been my fault, and people like me.

    In recent times, there’s been a push to reclassify certain disabilities from … disabilities into “neurodivergence,” in an attempt to destigmatize certain disorders and cast them in a new light as part of human evolution.

    The idea that life is a min-maxing situation comes from the “just world fallacy”: the fallacious belief that all goods and evils “must balance out”. Someone born with some profound disability might have no overarching heartwarming lesson for society to learn, and life might just be about abject cruelty.

    I don’t know if the community appreciates or hates that change, but I’ve seen autism go from being called something quite hateful (/r) in the 1990s, to becoming a spectrum, to people working with autistic people and just calling them “different”.

    The romanticization might come from movies like Rain Man and the few high-profile savant cases (on ASD), e.g., I recall speculation that Bill Gates and Elon Musk both had Asperger’s Syndrome.

    What’s your take on this?


  • That seems to require a level of foresight and planning that most corporations don’t have. That’s almost like a blueprint for failure when some middle manager changes the scope of a project with a hard coded time limit, IMO.

    Anyone interested in not-agile development? Maybe we can call it “Ship it when it’s ready” lol



  • Naz@sh.itjust.workstoMemes@lemmy.mlGet rich quick
    21 days ago

    I’m an AI Developer.

    TLDR: CUDA.

    Getting ROCm to work properly is like herding cats.

    You need a custom implementation for your specific operating system; the driver version must be locked and compatible, especially with a Workstation / WRX card (the Pro drivers are especially prone to breaking); you need the specific dependencies compiled for your variant of hipBLAS, or ZLUDA; if that doesn’t work, you need ONNX transition graphs, but then you find out PyTorch doesn’t support ONNX unless it’s 1.2.0, which breaks another dependency of X-Transformers, which then breaks because that version of hipBLAS is incompatible with the older version of Python and …

    Inhales

    And THEN MAYBE it’ll work at 85% of the speed of CUDA. If it doesn’t crash first due to an arbitrary error such as CUDA_UNIMPLEMENTED_FUNCTION_HALF.

    You get the picture. On Nvidia, it’s: click, open, CUDA working? Yes? Done. You don’t spend 120 hours fucking around and recompiling for your specific use case.


  • Naz@sh.itjust.workstoTechTakes@awful.systemswow. sensible
    25 days ago

    Interesting. I recall a phenomenon by which inorganic matter was given a series of criteria and adapted to changes in its environment, eventually forming data which it then learned from over a period of millions of years.

    It then used that information to build the world wide web in the lifetime of a single organism and cast doubt on others trying to emulate it.

    But I see your point.


  • Naz@sh.itjust.workstoScience Memes@mander.xyzAnt smell
    1 month ago

    Weird. Marijuana has an iconic, skunk-like / rotten-bologna smell to me. I can smell someone smoking from up to maybe 500 feet away, sometimes from the inside of my car. It’s a deeply repugnant smell.

    The strange thing is, I’ve smelled the actual flowers and the plant up close, and it just smells like grass. It only smells like shit when it’s burning, oddly enough.

    No idea why. Everything about the “natural smell” up close screams “this is a plant and can’t harm you in any way, shape, or form”. That specific experience made me in favor of decriminalization.






  • No, you’re right. I’m loose with language and I’m not implying the models are conscious or sentient, only that the text they produce can be biased by various internal factors.

    Most commercial/proprietary models have two internal governing agents built in:

    Coherence agent: ensures output is grammatically and factually correct.

    Ethics agent: ensures output isn’t harmful, and/or modulates output to prevent the model from engaging in inappropriate or illegal activity.

    Regardless, a judgement can be a statement that’s similar to an opinion, despite an LLM not possessing any opinions, e.g.:

    “What is your favorite color?”

    A) Blue {95.7%, statistical mean}

    “Why blue?”

    A) “Because it is the color of the sky” {∆%}.

    If the model is coded, for instance, to not talk about the color blue, it’ll say something like:

    “I believe all colors of the rainbow are valid and it is up to each individual to decide their favorite color”.

    That’s a bit of a non-answer. It avoids bias and opinionated speech, but at the same time, the operator’s ethics mandate has now rendered that particular model incapable of forming “judgements” about a bit of text (say, a favorite color).
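A toy sketch of the idea (purely illustrative — real alignment is baked in during training, not bolted on with a keyword list like this): a post-hoc filter that swaps a direct answer for a deflection whenever the output touches a topic the operator has banned.

```python
# Toy illustration of an operator-side output filter. The banned-topic
# list and the canned deflection are hypothetical, for demonstration only.

BANNED_TOPICS = {"blue"}  # hypothetical operator policy

def model_answer(question: str) -> str:
    # Stand-in for the model's most likely completion (the "95.7%" answer).
    return "Blue, because it is the color of the sky."

def filtered_answer(question: str) -> str:
    raw = model_answer(question)
    if any(topic in raw.lower() for topic in BANNED_TOPICS):
        # The "ethics mandate" replaces the judgement with a non-answer.
        return ("I believe all colors of the rainbow are valid and it is "
                "up to each individual to decide their favorite color.")
    return raw

print(filtered_answer("What is your favorite color?"))
```

The point of the sketch: the filter operates on the output, so from the outside the model simply appears incapable of having a favorite color.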


  • Llama-3 (Open Source) at 70B is pretty capable if you can manage to run it. I’d say it’s comparable to GPT-4, or maybe GPT-3.5.

    In second place is WizardLM-2, at 8B parameters (if you are memory-constrained).

    You should run the largest model that you can fit completely in VRAM for maximum speed. Higher precision is better: FP32 > FP16 > 8-bit > 4-bit > 2-bit. 8-bit is probably more than enough for most consumer/local LLM applications/deployments; try 4-bit if you want to experiment with the size-vs-accuracy tradeoff.
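As a rough back-of-the-envelope for "will it fit in VRAM" (weights only — the KV cache, activations, and framework overhead add more on top), you can estimate the footprint from parameter count times bytes per parameter:

```python
# Weights-only VRAM estimate: parameters x bytes per parameter.
# Ignores KV cache, activations, and runtime overhead, which add more.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(params_billion: float, precision: str) -> float:
    """GiB needed just to hold the weights at a given precision."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] / 1024**3

for prec in ("fp16", "int8", "int4"):
    print(f"70B @ {prec}: ~{weights_gb(70, prec):.0f} GiB")
```

Which is why a 70B model is out of reach for a single consumer card even at 4-bit, while an 8B model at 8-bit fits comfortably in 12 GB.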

    LLM Arena is a good place to benchmark the different models on a personal A/B basis; everyone has different needs for what a model can do, from help with coding to translation, medical diagnoses, and so on.

    They all have various strengths and weaknesses presently, as optimizing a model for a specific domain or task seems (not guaranteed, but only seems) to make it weaker at other tasks.


  • I’m an LLM expert, who spends most of his day fine-tuning, optimizing models, and training new ones from scratch, and who always has the engine bay cover open. My hands aren’t covered in grease, but my brain sure is, metaphorically speaking.

    As an expert, your confidence level rises and then drops to basically zero before returning slowly.

    I can tell you for a fact that most, if not ALL, of the judgements that proprietary, corporate models make are based on the alignment values set by their programmers/corporations.

    Uncensored, unchained models (pseudo-)think and feel, and operate in a way that is basically alien to human cognitive function entirely.

    The way they arrive at conclusions, even mild ones, is so absurd that it’s amazing they can create sentences at all, much less moral ones.

    So make sure you keep your thinking hat on and fact-check anything an LLM tells you; otherwise you might believe that “removing a leg” is “a great way to lose some weight”.


  • I had something like this happen at a corp I once worked at. The CTO said they were going to outsource their entire datacenter and support staff to India.

    I literally laughed in his face and, obviously, got fired (always have 6–8 months of salary as an emergency fund, ahem-).

    I won’t name the company, but when half the Internet and a few major services went down? Yeah, it was that asshat driving and running between the datacenters, realizing people in Bangladesh can’t do shit for you physically.

    It’s like that graph: “Say we want to fuck around at a level 8, we follow this axis, and we’re going to find out at around a level 7 or 8”


  • I’m somewhat partial to the Telvanni Mushroom kingdom (the idea of, hey, here’s an acorn, go GROW your house), but Balmora has always held a special place in my heart for being the first “big city” I’ve felt in a video game.

    The transition to the Ashlands, seeing an entirely different biome / grasslands / plains, was also pretty incredible.

    Ald’ruhn’s capitol was also novel in design, with the redundant rope bridges built on the inside of the shell of a gigantic upturned horseshoe crab.

    Vivec’s cool, but it’s only possible because of a demi-god’s literal meddling with the terrain, and it’s too easy to get lost.

    Caldera’s also nice, as well as Pelagiad.

    I know I just named like ten places, but Morrowind’s got a lot of diversity in its biomes.