keepthepace

joined 1 year ago
[–] [email protected] 4 points 3 weeks ago

THANK GOD YES! Imaginary matrices are a pain to multiply!

[–] [email protected] 13 points 3 weeks ago (5 children)

Is this a bad thing? I've always heard that here in France we have increasing forest coverage.

[–] [email protected] 39 points 3 weeks ago (1 children)

Alexandra Elbakyan deserves a Nobel and a presidential pardon. I doubt anyone alive today has done more for science.

[–] [email protected] 4 points 3 weeks ago

As someone not in the field (CS/Machine learning), what did you expect these to be?

[–] [email protected] 5 points 3 weeks ago

But... but... these are my maths shoes!

[–] [email protected] 2 points 4 weeks ago (2 children)

Yes, PDFs are much more permissive and may not have any semantic information at all. Hell, some old publications are just scanned images!

PDF -> semantic seems to be a hard problem that basically requires OCR, which is what these people are doing.
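
Not their actual code, just a minimal sketch of the kind of OCR fallback I mean, assuming the pdf2image and pytesseract packages (plus poppler and tesseract installed on the system):

```python
# Toy PDF -> text pipeline that rasterizes pages and OCRs them.
# Illustration only, not the linked project's approach.
from pdf2image import convert_from_path
import pytesseract

def pdf_to_text(path: str) -> str:
    pages = convert_from_path(path, dpi=300)  # one PIL image per page
    return "\n\n".join(pytesseract.image_to_string(page) for page in pages)

if __name__ == "__main__":
    print(pdf_to_text("old_scanned_paper.pdf"))  # hypothetical input file
```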

[–] [email protected] 7 points 4 weeks ago (4 children)

I love that PDFs are so difficult to transform into HTML, too

FYI, if that's relevant to your field, every new article published on arxiv.org now has an HTML render as well.

And for many older publications, changing "arxiv.org" to "ar5iv.org" in the URL leads to an HTML rendering from a best-effort experiment they ran for a while.
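
The rewrite is mechanical enough to script; a quick sketch (my own little helper, nothing official from arXiv, and the article ID is just a placeholder):

```python
def to_ar5iv(url: str) -> str:
    """Point an arxiv.org link at its ar5iv HTML counterpart (same path)."""
    return url.replace("//arxiv.org/", "//ar5iv.org/")

# e.g. https://arxiv.org/abs/1234.56789 -> https://ar5iv.org/abs/1234.56789
print(to_ar5iv("https://arxiv.org/abs/1234.56789"))
```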

[–] [email protected] 2 points 4 weeks ago

You are welcome.

[–] [email protected] 16 points 1 month ago (5 children)

Me as an intern in a lab, asked among other things to review a draft:

Hey, can you explain equation 3.1 to me? I am not sure what N and Q refer to.

Oh, that one I just copied from another paper; it's not really important to the argument.

[–] [email protected] 7 points 1 month ago (1 children)

Actually, I endorse the fact that we are less shy about calling "AI" algorithms that do exhibit emergent intelligence and broad knowledge. AI used to be a legitimate name for the field that encompasses ML, and nowadays we understand a lot of interesting things about intelligence thanks to LLMs: the fact that training on next-word prediction is enough to create pretty complex world models, that transformer architectures are capable of abstraction, or that morality arises naturally when you try to acquire all the prerequisites for a normal discussion with a human.
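
In case "training on next-word prediction" sounds abstract: here's a toy sketch of the objective in PyTorch (placeholder model and batch, just to show that the loss really is nothing more than "predict the next token"):

```python
# Toy next-token-prediction training loss. `model` and `tokens` are
# placeholders; any causal LM is trained on essentially this objective.
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    # tokens: (batch, seq_len) integer token ids
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)  # (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```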

[–] [email protected] 8 points 1 month ago (1 children)

Ain't humans cute, projecting their morality onto everything they lay their eyes on?
