  • Isn’t the whole point of something like End-to-End Encryption that not even the company itself can read your messages?

    In that case it wouldn’t matter even if they did turn the info over (rough sketch of why below).

    Edit: I read further into the page you linked. Looks like those NSLs can’t even be used to request message contents either way:

    Can the FBI obtain content—like e-mails or the content of phone calls—with an NSL?

    Not legally. While each type of NSL allows the FBI to obtain a different type of information, that information is limited to records—such as “subscriber information and toll billing records information” from telephone companies.
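
    To make the first point concrete, here’s a minimal sketch of the E2EE idea using the PyNaCl library in Python. The names and message are purely illustrative, not any specific messenger’s implementation:

    ```python
    # Minimal E2EE sketch with PyNaCl (pip install pynacl).
    # Illustrative only; real messengers layer much more on top.
    from nacl.public import PrivateKey, Box

    # Each user generates a keypair on their own device; the server
    # only ever sees the *public* halves.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts directly to Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # The server only stores/relays `ciphertext`. Without a private key
    # it's opaque bytes, so there are no "contents" to turn over.

    # Only Bob, holding his private key, can decrypt it.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"
    ```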

  • Some additional info the article doesn’t address or skims over:

    The accounts were suspended for 3 months.

    They only suspended accounts that were heavily abusing the exploit. Players who duped by accident, or only a handful of times, weren’t punished beyond losing some of their in-game currency and maybe a ship or two they bought with the earnings they made from duping.

    This is the first time that Star Citizen players have had a wave of suspensions like this for an exploit.

    This is most likely because of how this exploit affected the servers. In Star Citizen, abandoned ships stick around forever on a particular instance, so other players would need to hijack/tow/destroy/salvage them to get rid of them. The players abusing this exploit would duplicate ships with sellable cargo as fast as they possibly could, leaving behind more ships than the servers can normally handle.

    This also happened around the time of a free-fly event, where anyone could try out the game for a bit without having to pay. So the game wasn’t performing as well as it could have been during the event. Although, tbh, this game usually struggles during free-fly events anyway.

  • There’s a place for AI in NPCs, but developers will have to know how to implement it correctly or it will be a disaster.

    LLMs can be trained on specific characters and backstories, or even “types” of characters. If they are trained correctly, they will stay in character and be reactive in more ways than any scripted character ever could. But if the devs are lazy and just hook it up to ChatGPT with a simple prompt telling it to “pretend” to be some character, then it’s going to be terrible, like you say.

    Now, this won’t work very well for games where you’re trying to tell a story like Baldur’s Gate… instead this is better for more open-world games where the player is interacting with random characters that don’t need to follow specific scripts.

    Even then it won’t be everything. Just because an LLM can say something “in-character” doesn’t mean the line will match its in-game actions. So additional work will be needed to tie actions to the right kind of responses (rough sketch of one approach below).

    If a studio is able to do it right, this has game-changing potential… but I’m sure we’ll see a lot of rushed work done before anyone pulls it off well.
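
    Here’s a rough Python sketch of what I mean by tying dialogue to actions. `call_llm` is a hypothetical stand-in for whatever model endpoint a studio would actually use; the point is that the engine only ever executes a validated action, never the raw text:

    ```python
    import json

    # Actions the game engine actually supports for this NPC.
    ALLOWED_ACTIONS = {"greet", "trade", "refuse", "attack", "flee"}

    CHARACTER_SHEET = (
        "You are Mara, a wary dockside smuggler. Reply ONLY as JSON: "
        '{"say": "<one line of dialogue>", '
        '"action": "<one of greet|trade|refuse|attack|flee>"}'
    )

    def call_llm(system_prompt: str, player_line: str) -> str:
        """Hypothetical model call; a real game would hit an actual
        inference endpoint here (local or hosted)."""
        return '{"say": "Keep your voice down.", "action": "trade"}'

    def npc_turn(player_line: str) -> tuple[str, str]:
        raw = call_llm(CHARACTER_SHEET, player_line)
        try:
            reply = json.loads(raw)
            say, action = reply["say"], reply["action"]
            if action not in ALLOWED_ACTIONS:
                raise ValueError(action)
            return say, action
        except (ValueError, KeyError, json.JSONDecodeError):
            # Model went off-script: fall back to a scripted beat so the
            # spoken line never promises something the game can't do.
            return "Not interested.", "refuse"

    say, action = npc_turn("Got any cargo that fell off a ship?")
    print(say, "->", action)  # the engine executes `action`, not the text
    ```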