Tuesday, July 8, 2025

Blogaround

Links not related to the antichrist:

1. How Classic Star Trek Actually Missed the Point About Racism (June 25, 43-minute video) "Captain Kirk never felt it was necessary to pull McCoy aside and say, 'Hey, man, next time you disagree with Spock, maybe try not to mention what color his blood is.'"

2. Israel kills over 300 Palestinians in 48 hours as Gaza runs out of graves (July 3, via)

Also from Gaza: Knives, bullets and thieves: the quest for food in Gaza (July 6) "At the distribution site, I pushed people aside and grabbed whatever food I found tossed on the ground under torn cardboard boxes: cooking oil, biscuits, a bag of rice that had been torn open and was mixed with sand from the ground. I didn't care. It's food. I can wash it."

The destruction of Palestine is breaking the world (July 6, via) "This collapse began with the liberal world’s lack of resolve to rein in Israel’s war in Gaza. It escalated when no one lifted a finger to stop hospitals being bombed. It expanded when mass starvation became a weapon of war. And it is peaking at a time when total war is no longer viewed as a human abhorrence but is instead the deliberate policy of the state of Israel."

3. AI chatbots oversimplify scientific studies and gloss over critical details — the newest models are especially guilty (July 5, via)

I've started using Deepseek at work, asking it for help with programming tasks. There are some things it's good at, but also, sometimes it just has no idea what it's talking about, and it's not immediately obvious when those times are. Its output reads as equally authoritative and confident regardless. One trick I've figured out: if I read Deepseek's output and my reaction is "I didn't quite understand this, let me read it again more carefully," that's a red flag. Reading more carefully is probably just going to be a waste of time.

See, normally when you encounter text that sounds like it knows what it's talking about, but you can't quite understand it, it DOES help to spend more time reading it carefully. Because a person wrote it, and they had an idea of what they wanted to say, and they made decisions about what details to include. If their answer turns out not to be useful for you, that might be because they were working on a different thing than you are, or they were wrong about it, or they made typos when writing it up. But for AI-generated text, the concepts of having an idea it's trying to communicate, and of being right or wrong about it, don't mean anything. The LLM simply generates something that comes across as the sort of thing that would be an answer to your question. That's it.

I've found it's especially misleading when you're trying to write code for something that could be done in several different ways, with a different group of functions for each way (and some of them are deprecated, and some only work with certain versions of certain software packages, etc). The LLM will blend these together seamlessly, so you don't *at all* come away with the understanding that if you use *this* function, you also need to be using *this other* function, but not *that other* one.

Like this one time, Deepseek gave me some code that used 2 different functions, and then later, as I continued to debug, Deepseek brought up the point that if I'm using this one approach, I shouldn't use this other approach. And I'm like "you literally gave me code that said to do it that way," and Deepseek is all like "you're right to point that out!" And then I got mad at it, but there's nothing to be gained from arguing with Deepseek, so don't do that. It's never going to actually understand what it did wrong and do better.
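To make that concrete, here's a minimal, made-up sketch of the kind of mix-up I mean. It uses matplotlib, which is just my example, not what Deepseek actually gave me: matplotlib has two interfaces that don't combine well, the pyplot "current figure" functions and the object-oriented Figure/Axes methods, and LLM-generated code will happily blend them.

```python
# Hypothetical illustration, not the actual Deepseek output:
# matplotlib's pyplot functions act on whichever figure/axes is "current",
# while the object-oriented style works through explicit Figure/Axes objects.
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs without a display
import matplotlib.pyplot as plt

# The kind of mix an LLM might hand you: object-oriented setup...
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])

# ...followed by a pyplot-style call. This one happens to work, because `ax`
# is still the "current" axes at this point.
plt.title("squares")

# But as soon as a second figure exists, the "current" axes changes, and the
# same kind of pyplot-style call quietly lands on the wrong plot.
fig2, ax2 = plt.subplots()
plt.ylabel("value")  # goes to ax2, not the plot above

# Consistent version: once you have `ax`, stick to its methods.
ax.set_title("squares")
ax.set_ylabel("value")

fig.savefig("squares.png")
```

Nothing in there errors out, and each call is valid on its own; the code just quietly does something other than what you meant, which is exactly why reading the answer more carefully doesn't help.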

Anyway, sometimes it's really helpful. But when it's not helpful, it's very deceptive, because the output text is so friendly and easy to read and makes you feel like you're just 1 step away from solving your problem. This is the sort of thing where I think we as a society need to spend more time really thinking through what the use cases are, and what strategies make it actually effective and useful. Education about best practices.

Because right now, people are treating LLMs like the output they give you is the answer to your question, and oh yikes, no, that's not what it is. The output is the sort of thing that we might imagine someone might say in response to your question, but that has a tenuous connection to what's actually true and/or makes sense.

4. The ultra-selfish gene (2024) Really interesting article about how gene editing can be used to wipe out the subspecies of mosquito that carries malaria, which could potentially save hundreds of thousands of people's lives every year. But releasing an edited gene into the wild, to wipe out a whole subspecies? Uh, better think carefully about whether you really want to do that. A lot of unexpected things could go wrong. This article takes those risks seriously and talks about strategies to reduce the risk. There is a lot of potential for this technology to be used for good.

5. A Rough Ride: ‘Dirty’ Workers Stand Up to Subway Stares (July 4) "This leaves manual workers who need to commute in dusty clothes in a bind: they can either take a seat and potentially face the ire of fellow passengers, sit on the floor and risk injury, or stay on their weary feet throughout the journey."

6. I released my game (July 5) This is a cool game if you like math~

---

Links related to the antichrist:

1. 4 things to know about the vaccine ingredient thimerosal (July 6) "Fiscus, from the Association of Immunization Managers, says the committee's decision to only recommend single-dose flu shots without thimerosal shows that it is willing to make a decision without following protocol and considering the scientific evidence. 'Is this now going to be the standard?' she says. 'That's very concerning if that's where this is heading.'"

2. Arlington woman detained by ICE after her honeymoon speaks publicly for the first time (July 3) "'I did lose five months of my life because I was criminalized for being stateless,' she said. 'Something that I absolutely have no control over. I didn't choose to be stateless, I didn't do a crime that made me stateless. I had no choice.'"

3. Abrego Garcia says he was severely beaten in Salvadoran prison (July 3)

4. UPenn updates swimming records to settle with feds on transgender athletes case (July 2) "The University of Pennsylvania on Tuesday modified a trio of school records set by transgender swimmer Lia Thomas and said it would apologize to female athletes 'disadvantaged' by her participation on the women's swimming team, part of a resolution of a federal civil rights case." They're really going to a lot of effort just to be mean to Lia Thomas specifically.

5. All the children of the world (July 7) "That committee also organized our annual Missions Conference, which began each year with young people from our church parading through the sanctuary carrying the flags of every nation our missionaries served. “World Missions” was an integral part of that church’s identity and the faith taught, learned, and lived there."

6. Projected Mortality Impacts of the Budget Reconciliation Bill (June 3, via) "Including that impact, the researchers project that these changes will result in over 51,000 preventable deaths."

7. Bombshell report alleges El Salvador disclaimed responsibility for those U.S. sent to CECOT (July 8) "El Salvador, per the U.N. working group’s report attached to the filing, acknowledged that it had 'facilitated the use of the Salvadoran prison infrastructure' by the U.S. — but also stated that, '[i]n this context, the jurisdiction and legal responsibility for these persons lie exclusively with the' U.S."
