this post was submitted on 26 Feb 2024
18 points (75.0% liked)


Note: I refer to the scientists in this post by their last names, not because I think I am their 'peer', but because that's how the English language works; if I put 'Mr' before every last name, I'd sound like Consuela asking for Lemon Pledge! I am 30, and will turn 31 in less than 20 days. I am the same age as the Eternal September. I was born with the Web, but I hope the Web dies before me!

Also, if you don't know who Alan Kay is, don't be distraught or feel like an 'outsider' (especially if you are not much into the 'science' side of programming). Just know that he's a very important figure in CS (he is; you could look him up, perhaps?)

Now, let me explain what Kay told me.

Basically, Kay thinks the WWW people ignored the work of people like Engelbart and the NLS system, and that this was a folly.

Doug Engelbart, before UDP was even thought of and TCP was a twinkle in the eyes of its creators, before Ethernet was created, back when switches were LARGE mini-computers and ARPA was not yet DARPA, tried his hand at sending media across a network (the aforementioned ARPA's network; you may know the name from /usr/include/arpa). He even managed to do video conferencing. That was in the 1960s! He came up with a 'protocol', and this 'protocol' was not the TCP/IP stack we know today; it was completely different. I don't think he even called it that. The name of this 'protocol' was the 'oN-Line System', or NLS.

Engelbart's NLS was different from the 4-layer abstraction we know and love today. It was different from the Web. It was like sending 'computations' across. Like a Flash clip! Kay believes that the WWW people should not have sent a 'page'; they should have sent a 'process'. He basically says: "You're on a computer, goddammit, send something that can be computed! Not plaintext!"

Full disclosure: yes, Kay is too brutal to Berners-Lee in this answer. I don't deny that. And his 'Doghouse' analogy is a bit reductive. But I digress.

Kay believes the TCP/IP stack is sound. I think anyone who has read a networking textbook (like Computer Networking: A Top-Down Approach, which I have recently perused) wouldn't dispute this. But he believes people are misusing it, abusing it, or not using it right.

In the speech I am referring to in the question title, Kay said:

[Paraphrasing] "This is what happens when we let physicists play computer [...] If a computer scientist were to create 'web', he would do a pipeline, ending at X"

X refers to the X Window System used on UNIX systems; it's a standard, like the Web is. The implementation of X11 on my system is Xorg, and it's slowly being replaced by Wayland.

So what does this mean? Well, Kay says, 'send a process that can be piped'! Does that sound dangerous and insecure? WELL, DON'T ELEVATE ITS ACCESS!

Imagine this process-based protocol were also called 'web', and the utility to interface with it were called 'wcomm', just as wget is for HTTP. Now imagine PostScript were powerful enough to describe video. Then we could fetch a video from YouTube, render it, and watch it:

$ wcomm youtube.com/fSWmufgTp6EQ.ps | mkmpg | xwatch

So what is different here? How is this different from using a utility like ytdl and piping it into VLC?

Remember: when we do that, we are getting a binary file. But in my imaginary example, we are getting a 'video description' in the form of PostScript.
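To make the contrast concrete, here is a toy, purely local sketch. Every filename, 'format', and pipeline stage below is invented for illustration: in the first model the wire carries finished bytes the client can only replay, while in the second it carries a small description that local stages actually compute on.

```shell
# Toy contrast, purely local; all names and "formats" here are made up.

# Model 1 (today's web): the wire carries opaque, pre-rendered bytes;
# the client can only store or replay them.
printf '\001\002\003\004' > /tmp/clip.bin
wc -c < /tmp/clip.bin

# Model 2 (Kay's pipeline): the wire carries a small *description*, and
# local stages compute on it, the way 'wcomm | mkmpg | xwatch' would.
echo 'circle 10 10 5' > /tmp/clip.desc          # PostScript-like description
sed 's/^circle/draw-circle/' /tmp/clip.desc     # stand-in "renderer" stage
```

The second model moves the rendering work to the receiving end of the wire, which is exactly the point Kay is making.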

====

So anyway, as I said, I am no expert myself, but I think this is what Kay means. As Kay himself says, PostScript is too weak for today's uses! But I guess, if the Web were not this 'hacky', there would be a 'WebScript' that could do it!

Thanks.

[–] [email protected] 3 points 8 months ago (1 children)

Good point, me lad. The plaintext-based approach indeed makes scraping much easier. Plus, if we send a 'process', that process can easily be malicious, even if we don't elevate its access.

Like, imagine today: I tell you to wget a shell script and pipe it into a shell to install my software from my remote (FTP, Git, etc.). This almost always needs sudo. I can't imagine how many 12-year-olds would be fooled into a sudo rm -rf *!
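A defanged sketch of that failure mode: the 'remote' install script is faked with a local temp file, and the payload merely drops a marker file instead of doing real damage.

```shell
# Defanged demo of the "pipe a script into your shell" pattern. The
# "remote" script is faked locally, and the payload only drops a marker
# file instead of doing anything destructive.
cat > /tmp/install.sh <<'EOF'
echo "Installing totally-legit-software..."
touch /tmp/you-just-ran-untrusted-code   # imagine 'rm -rf' here instead
EOF

# The risky pattern: the script executes before you ever read it.
sh < /tmp/install.sh

# Safer: fetch to a file first, read it, and only then decide to run it
# (and without sudo unless you really must).
cat /tmp/install.sh
```

The point: with a curl-straight-into-sh one-liner, the code runs before you have any chance to inspect it; fetching to a file first at least lets you look.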

That is, provided a 12-year-old would even know how to do that. I know several people who began their UNIX journey as young as 7 or 8, but there's a reason those people earn 500k a year by the time they're 30! I can't imagine your normie aunt really feeling like using a UNIX pipeline to check her email.

HTTP 'just werks'. Derpcat told me this back in 2010, when I told her I hated HTTP. IT JUST WERKS. Kay's solution, although extremely half-baked, would not let my mom read her Instagram feed.

Besides money, the computational cost is also high. Kay used to use mini-computers; us poor people used micros (that is, if I had been around when the mini/micro distinction still existed; today it's just clusters vs. Jimmy's gaming rig. Oh, where art thou, DEC?).

But again, nobody has given it a thought. THAT IS THE ISSUE. Academic work on alternatives to the Web is, AFAIK, rare. Part of it is the 'just werks' thing, but also, academia simply does not care about the Web.

I think if people smarter than me gave this a 'thorough' thought, they would come up with a good solution. The Web won because it was 'open' and easy to navigate, as opposed to pesky newsgroups and the like. You can still visit the first website to see this: http://info.cern.ch/ (browse it with Lynx or W3M; that's the best way to do it! Don't use FF or Chrome).

I dunno!

[–] [email protected] 2 points 8 months ago (1 children)

What I think is that HTML was indeed easy to write, back in the days when everyone used plaintext editors. If you had needed a dedicated editor, you could only have hoped your computer vendor would provide one. There were, IIRC, a few OSes other than Windows.

One good thing about HTML is that a browser can ignore tags it doesn't know and still present the correct body text to the user. That was crucial, because open standardization was rare at the time; instead, there was the browser war. IE invented horrific tags no other browser could understand, and Netscape even invented JavaScript while IE was stuck with static text.

CSS also came along, with the idea that HTML should focus on the textual information while CSS handled the visual design. And they did a really good job of making it hard to center text...

[–] [email protected] 2 points 8 months ago

CSS also came along, with the idea that HTML should focus on the textual information while CSS handled the visual design.

My biggest beef with CSS is that it's on the wrong end of the wire. Whatever happened to the idea that the client is in charge of rendering?

Or maybe it's that clients have abdicated their responsibility: the browser included with OS/2 Warp had a settings page that let me set the display characteristics of every tag in the spec. Thus, every site looked approximately the same: my fonts, my sizes, my indents, my spacing, whether images displayed (or even downloaded, I think), and whether text broke at an image or wrapped around it. And it's not like I had to customize everything for each site: if a site used a tag my browser recognized, my browser took over.