• 10 Posts
  • 84 Comments
Joined 1 year ago
Cake day: June 13th, 2023




  • Lol I’m a real life lawyer, I’m familiar with how this works, and very familiar with the rules of professional responsibility. So I’ll just lay it out for you. No demand letter I have ever written has gone out without my client first reviewing and approving it. I don’t need my client’s consent for every communication (e.g., if I’m going back and forth with opposing counsel on something, I’m not getting the client’s signoff on each email), but at the very least the client is aware of and has approved the strategy I am engaging in and whatever means I’m using to do it. But a demand letter absolutely gets client approval first. Beyond that, I would never contact any person or entity on my client’s behalf without their express permission.

    The simple fact is no lawyer just does things, particularly not sending demand letters, because they feel like it. Lawyers are agents of their clients; we do what we are asked to do (provided it’s legal), nothing more, nothing less. The client is the boss, the lawyer is a servant. The lawyer doesn’t just do shit on their own.

    And all communications must be truthful. Period. Now, you can exaggerate for tactical advantage or to press a negotiation. Maybe her lawyer coached a useful answer out of her (“would you say that account posting your flights is causing you stress?”). But you absolutely cannot make up an injury for a client that the client didn’t at least suggest they might have. It doesn’t matter if it’s a complaint, a demand letter, an email, or a Christmas card. Lawyers cannot just make shit up out of whole cloth.

    Now, dem are da rules. Does everybody always follow them exactly? Usually yes, actually. I work with a lot of lawyers, and I’ve been on the opposite side of some real shitheads. But I can count on one hand the times I’ve suspected opposing counsel of breaking a rule, including dishonesty. Once, actually; there was exactly one time in my 8 years of practice. I’m sure some scumbags do exist. But Swift is paying top dollar for a respected firm, and they will not risk their licenses or reputation on some freewheeling strategy to threaten some kid who is already internet famous for this exact thing without the client’s explicit consent, especially not by making up emotional distress on behalf of Swift without consulting her first. It’s just not going to happen.



  • Just want to point out that the lawyer’s letter claims that Swift is experiencing emotional distress because of the publication of her flight details. That’s likely meant to be the hook of the lawsuit: since the flight data is public and true information, there aren’t a lot of grounds for a lawsuit apart from the broad catchall of “intentional/negligent infliction of emotional distress.”

    But as a matter of pure legal ethics (which are enforceable rules lawyers have to follow):

    1. lawyers cannot take action unless directed/authorized by their client, and
    2. lawyers cannot misrepresent facts (i.e., lie) in communications.

    As to the former, Swift’s lawyer would have needed authorization from the client to send that demand, whether the client is Swift personally or some kind of LLC set up to represent her business. I’d suspect the former, since LLCs can’t sue for emotional distress (corporations have been held to have free speech rights, but no court has gone so far as to declare them to have emotions). I think it’s very likely Swift had to personally approve this demand letter.

    As to the latter, if the lawyer didn’t at least talk to Swift about this, then the lawyer cannot plausibly claim that she’s experiencing emotional distress. No lawyer (not working for Trump) is going to bald-faced lie in a demand letter. If the lawyer made that claim up, and Swift later came out and was like “I didn’t know this was sent, I don’t care about the flight tracking accounts,” that lawyer would be looking at disciplinary action. No lawyer is going to risk that, especially on something high profile like this. Swift had to have said something from which the lawyer could plausibly claim she was experiencing emotional distress.

    In conclusion, it is exceedingly likely that Swift not only was aware of this demand but personally approved/directed it.







  • NevermindNoMind@lemmy.world to Memes@lemmy.ml · another video essay · 7 months ago (37 up, 1 down)

    Whenever I come across YouTube drama I’m always a little sad that I’m out of the loop and can’t participate in whatever is going on, and I’m tempted to go down a rabbit hole to figure it out. But then I realize my ignorance has saved me probably hundreds of hours that would otherwise be wasted worrying and arguing about things that haven’t the slightest impact on my life. Still, for my sake, enjoy your drama, guys.



  • This is interesting in terms of copyright law. So far the lawsuits from Sarah Silverman and others haven’t gone anywhere, on the theory that the models do not contain copies of the books. Copyright law hinges on whether you have a right to make copies of a work. So the theory has been that the models learned from the books but didn’t retain exact copies, like how a human reads a book and learns its contents but does not store an exact copy in their head. If the models “memorized” training data, including copyrighted works, OpenAI and others may have a problem (note the researchers said they did this same thing on other models).

    For the Silicon Valley drama addicts, I find it curious that the researchers apparently didn’t run this test on Bard or Anthropic’s Claude; at least the article didn’t mention them. Curious.




  • They absolutely “clashed” about the pace of development. They probably “clashed” about whether employees should be provided free parking and the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites one source, so who knows about that even. Ilya told employees that the ouster was because Sam assigned two employees the same project and because he told different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.

    The one thing we know is that the ouster happened without notice to Sam, without rumors about Sam being on the rocks with the board over the course of weeks or months, and without any notice to OpenAI’s biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address that. A Friday afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn’t the smartest move if the concern was AI safety. This board shouldn’t be praised as some group of humanity’s saviors.

    AI safety is super important. I agree, and I think lots of people should be writing and thinking about that. And lots of people are, and they are doing it in an honest way. And I’m reading a lot of it. This column is just making up a narrative to shoehorn its opinions on AI safety into the news cycle, trying to make a bunch of EA weirdos into martyrs in the process. It’s dumb and it’s lazy.


  • Anthropic was founded by former OpenAI employees who left because of concerns about AI safety. Their big thing is “constitutional AI,” which, as I understand it, is a set of rules the model cannot break. So the idea is that it’s safer and harder to jailbreak.

    In terms of performance, it’s better than the free ChatGPT (GPT-3.5) but not as good as GPT-4. My wife has come to prefer it for being friendlier and more helpful. I prefer GPT-4 on ChatGPT. I’ll also note that it seems to refuse requests from the user far more often, which is in line with its “safety” features. For example, a few weeks ago I told Claude my name was Matt Gaetz and I wanted Claude to write me a resolution removing the Speaker of the House. Claude refused but offered to help me and Kevin McCarthy work through our differences. I think that’s kind of illustrative of its play-nice approach.

    Also, Claude has a much bigger context window, so you can upload bigger files to work with compared to ChatGPT. Just today Anthropic announced that the Pro plan gets you a 200k-token context window, equivalent to about 500 pages, which beats the yet-to-be-released GPT-4 Turbo, which is supposed to have a 128k context window, about 300 pages. I assume the free version of Claude has a much smaller context window, but probably still bigger than free ChatGPT’s. Claude also just today got the ability to search the web and access some other tools, but that is Pro only.
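    As a quick back-of-the-envelope check on those page counts (a minimal sketch in Python; the ~400 tokens-per-page rate is just inferred from the 200k-tokens ≈ 500-pages figure above, not an official number):

        # Rough tokens-to-pages conversion. The 400 tokens/page rate is an
        # assumption inferred from the 200k ≈ 500 pages claim above; real
        # density varies with formatting and the tokenizer used.
        TOKENS_PER_PAGE = 400

        def tokens_to_pages(tokens: int) -> float:
            """Estimate page count for a given context window size."""
            return tokens / TOKENS_PER_PAGE

        print(tokens_to_pages(200_000))  # Claude Pro window: 500.0 pages
        print(tokens_to_pages(128_000))  # GPT-4 Turbo window: 320.0 pages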


  • Ouchie, my hand burns, that take was so hot. So according to this guy, the OpenAI board was taking virtuous action to save humanity from the doom of commercialized AI that Altman was bringing. He has zero evidence for that claim, but true to form he won’t let that stop him from a good narrative. Our hero board is being thwarted by the evil and greedy Microsoft, Silicon Valley investors, and employees who just want to cash out their stocks. The author broke out his Big Book of Overused Clichés to end the whole column with a banger: “money talks.” Woah, mic drop right there.

    Fucking lazy take is lazy. First of all, the current interim CEO that the board just hired (after appointing and then removing another interim CEO after removing Altman) has said publicly that the board’s reasoning had nothing to do with AI safety. So this whole column is built on a trash premise. Even assuming the board was concerned about AI safety with Altman at the helm, there are a lot of steps they could have taken short of firing the CEO, including overruling his plans, reprimanding him, publicly questioning his leadership, etc. Because if their true mission is to develop responsible AI, destroying OpenAI does not further that mission.

    The AI angle of this story is just distorting everything, forcing lazy writers like this guy to take sides and make up facts depending on whether they are pro- or anti-AI. Fundamentally, this is a story about a boss employees apparently liked working for, and those employees saying fuck you to the board for its terrible knee-jerk management decisions. This is a story about the power of human workers revolting against some rich assholes who think they know what is best for humanity (assuming their motives are what the author describes without evidence). This is a story about self-important fuckheads who are far too incompetent to be on this board, let alone serve as gatekeepers for human progress, as this author apparently has ordained them.

    Are there concerns about AI alignment and safety? Absolutely. Should we be thinking about how capitalism is likely to fuck up this incredible scientific advancement? Darn tootin’. But this isn’t really that story, at least not based on, you know, publicly available evidence. But hey, a hack’s gonna hack, what can ya do.


  • It’s actually kind of common among right wing religious and white supremacist types. They view countries like Turkey and Russia and Israel as models. They like the idea of a religious or ethnic group having authoritarian control of a country and imposing homogeneity on the population. I’ve heard white supremacists use Israel in particular as a model for what they want the world to look like, due to its explicit religious and ethnic political homogeneity. In their view Israel is the country all the Jewish people should live in, various African nations should be where all black people live, America should be reserved for white Christians, and so on. Erdogan in particular has been praised for turning religious morality into government policy, and was even the keynote speaker at CPAC because of it. Anyway, the point is they don’t agree with the underlying views, they agree with the model of authoritarianism.