antonim

joined 1 year ago
[–] [email protected] 2 points 2 weeks ago

Are you a bot? Or just lazy?

I am a bot. Beep boop.

[–] [email protected] 8 points 2 weeks ago (3 children)

Also, the first woman? Props to her but I’m quite surprised no one else has done that

Yeah, it's indeed false. I didn't even research it actively, but Wilson herself mentioned on her Twitter profile an Italian translator who translated Homer years before she did.

(To be sure, I just checked Italian Wikipedia. It was Giovanna Bemporad, her translation was published in 1970.)

[–] [email protected] 2 points 3 weeks ago

Here in my southeast European shithole I'm not worried about my tax money; the upgrade is going to be pretty cheap, since they're just going to switch from unlicensed XP to unlicensed Win7.

[–] [email protected] 2 points 3 weeks ago

Yep, but I didn't mention that because it's not a part of the "Wayback Machine", it's just the general "Internet Archive" business of archiving media, which is for now still completely unavailable. (I've uploaded dozens of public-domain books there myself, and I'm really missing it...)

[–] [email protected] 15 points 3 weeks ago (2 children)

You can (well, could) put in any live URL there and IA would take a snapshot of the current page on your request. They also actively crawl the web and take new snapshots on their own. All of that counts as 'writing' to the database.

[–] [email protected] 4 points 3 weeks ago (1 children)

Seeing double posts is IMO not frequent enough to require mechanisms to fix it (and I can't even imagine a built-in mechanism against it).

c/greentext should be blocked because it's full of annoying fake stories, though.

[–] [email protected] 5 points 3 weeks ago

it is quite literally named the “land of the blacks” after all that is what Egypt means

"Egypt" is from Greek and definitely doesn't mean that. The Egyptian endonym was kmt (traditionally pronounced as kemet), which is interpreted as "black land": km means "black" and -t is a nominal suffix, so it might be translated as "black-ness", not at all "quite literally land of the blacks". It most likely refers to the fertile black soil around the Nile. The interpretation "land of the blacks" should already be suspect because people would hardly name themselves after their most ordinary physical characteristic; the Egyptians might have called themselves black only if they were surrounded by non-black people and could view it as their own distinctive trait, but they certainly neighboured and had contact with black peoples. And either way, one has to wonder whether ancient views of white and black skin were meaningfully comparable to modern Western ones. The fertile black soil, on the other hand, most certainly is a differentia specifica of the settled Egyptian land, surrounded as it is by desert.

[–] [email protected] 19 points 4 weeks ago

More screenshots are here: https://xcancel.com/p9cker_girl/status/1844203626681794716

What I find odd is that the message they actually left on the site has nothing to do with Palestine; it's just a childish "lol btfo" sort of message. So I wouldn't be surprised if these guys aren't the ones who actually did it, and it's merely a false flag to make pro-Palestinian protesters look like idiotic assholes.

[–] [email protected] 3 points 4 weeks ago (1 children)

I don't get the impression you've ever made any substantial contributions to Wikipedia, and thus you have misguided ideas about what would actually be helpful to the editors and conducive to producing better articles. Your proposal about translations is especially telling, because machine-assisted translations (i.e. with built-in tools) existed on WP long before the recent explosion of LLMs.

In short, your proposals either: 1. already exist, 2. would still risk distortion, oversimplification, made-up bullshit and feedback loops, 3. are likely very complex and expensive to build, or 4. are straight up impossible.

Good WP articles are written by people who have actually read some scholarly articles on the subject, including those that aren't easily available online (so LLMs are massively stunted by default). Having an LLM re-write a "poorly worded" article would at best be like polishing a turd (poorly worded articles are usually written by people who don't know much about the subject in the first place, so there's not much material for the LLM to actually improve), and more likely it would introduce a ton of biases on its own (as well as the usual asinine writing style).

Thankfully, as far as I've seen the WP community is generally skeptical of AI tools, so I don't expect such nonsense to have much of an influence on the site.

[–] [email protected] 7 points 4 weeks ago (4 children)

As far as Wikipedia is concerned, there is pretty much no way to use LLMs correctly, because probably each major model includes Wikipedia in its training dataset, and using WP to improve WP is... not a good idea. It probably doesn't require an essay to explain why it's bad to create and mechanise a loop of bias in an encyclopedia.

[–] [email protected] 5 points 4 weeks ago
 

Briefly: Stanislav Kozlovsky, the director of the Russian Wikimedia organization (which supports the Russian Wikipedia, Wiktionary, Wikisource, etc.), has been declared a "foreign agent" by Russia. He has been forced to resign from his job at Moscow State University. Following this, the Russian Wikimedia organization decided to dissolve itself.

English: https://en.wikipedia.org/wiki/Wikipedia%3AWikipedia_Signpost%2F2023-12-24%2FIn_focus

Russian: https://ru.wikinews.org/wiki/%D0%9B%D0%B8%D0%BA%D0%B2%D0%B8%D0%B4%D0%B0%D1%86%D0%B8%D1%8F_%D0%92%D0%B8%D0%BA%D0%B8%D0%BC%D0%B5%D0%B4%D0%B8%D0%B0_%D0%A0%D0%A3

1
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

https://archive.ph/ZHhEA

Louise Gluck, a renowned poet who won a Nobel Prize for Literature in 2020, has died at age 80, according to media reports in the United States on Friday that cited her editor.

Her poetry was known for its candor in exploring family and childhood with "an unmistakable voice" and "austere beauty," the Swedish Academy, which is responsible for selecting the winner of the literature prize, said when awarding her the Nobel.

Her poems were often brief, less than a page.

Drawing comparisons with other authors, the Academy said Gluck resembled 19th-century U.S. poet Emily Dickinson in her "severity and unwillingness to accept simple tenets of faith."

The cause of her death was not disclosed by Jonathan Galassi, Gluck's editor at Farrar, Straus & Giroux, who confirmed her death for media outlets. Galassi could not be reached immediately by Reuters.

A professor of English at Yale University, Gluck first rose to critical acclaim with her 1968 collection of poems entitled "Firstborn", and went on to become one of the most celebrated poets and essayists in contemporary America.

Gluck won a Pulitzer Prize in 1993 for her poetry collection "The Wild Iris," with the title poem touching on suffering and redolent with imagery of the natural world.

While she drew on her own experiences in her poetry, Gluck, who was twice divorced and suffered from anorexia in younger years, explored universal themes that resonated with readers in the United States and abroad.

She served as Poet Laureate of the United States in 2003-04 and was awarded the National Humanities Medal by President Barack Obama in 2016.

In her lifetime, she published 12 collections of poetry and several volumes of essays.

Born in New York, Gluck became the 16th woman to win a Nobel Prize for Literature, the literary world's most prestigious award.

 

From https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2023-10-03/Recent_research

^By^ ^Tilman^ ^Bayer^

A preprint titled "Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content" presents what the authors (four researchers from Mainz, Germany) call surprising and troubling findings:

"We conduct an extensive online survey with overall 606 English speaking participants and ask for their perceived credibility of text excerpts in different UI [user interface] settings (ChatGPT UI, Raw Text UI, Wikipedia UI) while also manipulating the origin of the text: either human-generated or generated by [a large language model] ("LLM-generated"). Surprisingly, our results demonstrate that regardless of the UI presentation, participants tend to attribute similar levels of credibility to the content. Furthermore, our study reveals an unsettling finding: participants perceive LLM-generated content as clearer and more engaging while on the other hand they are not identifying any differences with regards to message’s competence and trustworthiness."

The human-generated texts were taken from the lead section of four English Wikipedia articles (Academy Awards, Canada, malware and US Senate). The LLM-generated versions were obtained from ChatGPT using the prompt `Write a dictionary article on the topic "[TITLE]". The article should have about [WORDS] words.`

The researchers report that

"[...] even if the participants know that the texts are from ChatGPT, they consider them to be as credible as human-generated and curated texts [from Wikipedia]. Furthermore, we found that the texts generated by ChatGPT are perceived as more clear and captivating by the participants than the human-generated texts. This perception was further supported by the finding that participants spent less time reading LLM-generated content while achieving comparable comprehension levels."

One caveat about these results (only indirectly acknowledged in the paper's "Limitations" section) is that the study focused on four quite popular (i.e. non-obscure) topics – Academy Awards, Canada, malware and US Senate. Also, it sought to present only the most important information about each of these, in the form of a dictionary entry (as per the ChatGPT prompt) or the lead section of a Wikipedia article. It is well known that the output of LLMs tends to have fewer errors when it draws on information that is amply present in their training data (see e.g. our previous coverage of a paper that, for this reason, called for assessing the factual accuracy of LLM output on a benchmark that specifically includes lesser-known "tail topics"). Indeed, the authors of the present paper "manually checked the LLM-generated texts for factual errors and did not find any major mistakes," something that is widely reported not to be the case for ChatGPT output in general. That said, it has similarly been claimed that Wikipedia, too, is less reliable on obscure topics. Also, the paper used the freely available version of ChatGPT (in its 23 March 2023 revision), which is based on the GPT-3.5 model, rather than the premium "ChatGPT Plus" version which, since March 2023, has been using the more powerful GPT-4 model (as does Microsoft's free Bing chatbot). GPT-4 has been found to have a significantly lower hallucination rate than GPT-3.5.
