this post was submitted on 06 Nov 2023
57 points (100.0% liked)

[–] [email protected] 1 points 10 months ago (2 children)

I get that this is expensive. However, it should also work with regular RAM if you accept slower speeds, I guess. The question, of course, is whether it's still usable then.

[–] [email protected] 4 points 10 months ago

Most current locally hosted software has some option to offload to RAM, CPU, and disk. VRAM is fastest, but RAM and CPU offloading lets you get down to less than 4 GB of VRAM for certain applications, at perfectly reasonable speeds. See the sketch below for what that looks like in practice.
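
To make that concrete, here's a minimal sketch of partial offloading using the llama-cpp-python bindings. The model path and layer split are placeholder assumptions, not from the thread; the idea is just that `n_gpu_layers` controls how many layers stay in VRAM while the rest run on CPU/RAM:

```python
# Minimal sketch: partial GPU offload with llama-cpp-python
# (pip install llama-cpp-python). Model path and layer count
# below are placeholders -- tune n_gpu_layers to fit your VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # any GGUF model
    n_gpu_layers=20,  # layers kept in VRAM; the rest are offloaded to CPU/RAM
    n_ctx=2048,       # context window size
)

out = llm("Q: What does VRAM offloading do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Setting `n_gpu_layers=0` runs entirely on CPU/RAM (slowest), while `-1` offloads everything to the GPU if it fits; anything in between trades VRAM usage for speed.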

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

GPT-4 is already kinda slow - it works best as a "conversational" tool where you ask follow-up questions and clarify things that have already been said. That's painful when you have to wait 10 seconds for a response. I couldn't imagine it being useful if responses took minutes.

[–] [email protected] 1 points 10 months ago

Having to wait 10 seconds for a response is "painful"?