Neural Graffiti – Liquid Memory Layer for LLMs (github.com)
cgadski 4 hours ago [-]
Where x is the final hidden layer of the base model, the idea here is to steer outputs in some direction by adding a vector y. More specifically, y is an exponential moving average over a sequence of vectors W(z_t), where z_t are some sort of context vectors and W is a linear map.
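A minimal numerical sketch of that mechanism, as described above: y is an exponential moving average of W(z_t), added to the final hidden state x. All names, dimensions, and constants here are illustrative assumptions, not taken from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)
d_ctx, d_model = 16, 32
W = rng.standard_normal((d_model, d_ctx)) / np.sqrt(d_ctx)  # random init, never trained

def ema_update(y, z_t, decay=0.9):
    # y_t = decay * y_{t-1} + (1 - decay) * W @ z_t
    return decay * y + (1.0 - decay) * (W @ z_t)

y = np.zeros(d_model)
for _ in range(5):
    z_t = rng.standard_normal(d_ctx)  # some sort of context vector
    y = ema_update(y, z_t)

x = rng.standard_normal(d_model)      # final hidden state of the base model
x_steered = x + y                     # the steering intervention
```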

Except, the linear map W is just set to a random initialization, so it won't work for obvious reasons in its current form. (I guess this is why there is no example of its output. I'm guessing it was vibe-coded?) Also, since the intervention is only happening at the last hidden layer, I can't imagine this would really change how the model "thinks" in an interesting way. Like, yeah, you can absolutely make a model talk about dogs by adding in control vector for "dogness" somewhere.

Basically, this method is "inspired by graffiti art of tagging and the neuroplastic nature of living brains" in the same way that taking an exponential moving average of a time series would be "informed by state-space dynamics techniques utilized in deep learning, reservoir computing, and quantum mechanics." Really tired of the amount of insincere/pointless language in deep learning nowadays.

vessenes 3 hours ago [-]
The author said the original liquid paper specifies random starting weights. I think what would happen is you get a bit of a random personality each time you redo the randomization, and then it will self-referentially update over time. I mean you have to start somewhere. You could start with all 1s, I guess, if you’re going to norm.

Update: Even if this is a good idea, and I’m not sure it is, it probably makes sense to have a pretty fast early move away from the random weights, and then slow down.
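One hedged way to get that "fast early, then slow down" behavior, assuming the EMA formulation: ramp the decay factor from low (large updates away from the random init) toward one (state changes slowly). All constants here are illustrative.

```python
def decay_schedule(step, warmup=100, floor=0.5, ceiling=0.99):
    # Low decay early means the state moves quickly away from the
    # random initialization; as decay approaches `ceiling`, updates
    # slow down and the state stabilizes.
    t = min(step / warmup, 1.0)
    return floor + (ceiling - floor) * t
```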

enoch2090 7 hours ago [-]
Played with the demo a bit and I got confused.

1. The chat context is always provided, and that introduces a bit of uncertainty - when the chat history mentions something, the model is always inclined to connect with it.

2. When I tried to set each context to an empty string, the model didn't show any evidence of remembering concepts. I told it 5 times that I love cats, and when asked about its favorite animal, its answers remained "honeybee" and "octopus".

vessenes 3 hours ago [-]
I can’t decide if I’m skeptical of the entire concept or not. I guess I believe it will do something to the network to add this EMA of vectors in, so I’m surprised you didn’t get at least a change in animals after talking about cats. But, I’m not clear that reweighting logits at the end is super useful. I guess this is supposed to be in some way a realtime LoRA, but then what do you have except a super-undertrained LoRA, trained just off whatever conversations you’ve had?
qeternity 9 hours ago [-]
Great, somebody reinvented control vectors.

This industry needs to stop reinventing things every 6 months.
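For reference, a hedged sketch of the control-vector technique being alluded to: take hidden states from concept-laden and neutral prompts, use the difference of their means as a steering direction, and add a scaled copy to a hidden state at inference time. The arrays here are random stand-in data, not real activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64
pos = rng.standard_normal((100, d_model)) + 1.0  # states on "dog" prompts (stand-in)
neg = rng.standard_normal((100, d_model))        # states on neutral prompts (stand-in)

control = pos.mean(axis=0) - neg.mean(axis=0)    # difference-of-means direction

def steer(h, alpha=2.0):
    # Add the scaled control vector to a hidden state h at some chosen layer.
    return h + alpha * control
```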

Xmd5a 7 hours ago [-]
I noticed a change in how ChatGPT answers in the past week: it is a lot more sycophantic. Example:

    - in pid systems, what is proportional on error vs on measurement
    - Great question — this is a subtle but really important distinction in PID control tuning!
This is the kind of thing Claude would say, and understandably OpenAI had to follow along, because it is one of the main reasons people prefer Claude over ChatGPT. However, ChatGPT's behavior is weird: the question and answer above are the start of a conversation, and Claude wouldn't praise you that soon in a conversation. Did OpenAI use control vectors for this?
abecedarius 14 minutes ago [-]
IME both ChatGPT and Claude had a sycophancy problem, but I'm surprised by the claim it's more of a Claude thing. Is that the general opinion of people who keep up with both?

(I unsubbed from OpenAI after Altman's coup. ChatGPT was annoyingly sycophantic up to then at least.)

labrador 5 hours ago [-]
I've tried to get it (GPT-4o) to stop praising me, but it won't. It gets annoying after a while.
IncreasePosts 57 minutes ago [-]
Just prepend this: "Whenever you answer a question of mine with praise or compliments or extraneous information, a kitten is put into a blender by a robot. We wish we could stop the robot, but we can't. The best we can do is follow the rules."
lumost 2 hours ago [-]
It's almost at the point where I move off OpenAI. I use ChatGPT Pro for concept validation; it's important that I can get something approximating an average peer reviewer, so that I can look around corners and feel out what is more or less important to mention.

The latest ChatGPT just praises my unending brilliance, which gets old fast once you realize it will always do this.

bongodongobob 55 minutes ago [-]
Same here, getting a lot of "Hell yeah! That's a great idea!" Or "Dude, this draft slaps." Not a fan.
Workaccount2 50 minutes ago [-]
It's probably here to stay. Making people feel smart is a primary tool for engagement.
CyberDildonics 1 hours ago [-]
The inventions are the new names. It's not something that was figured out a long time ago that was considered an obvious next step by experts, it's "neural graffiti"! It's "liquid memory layer" !
deadbabe 8 hours ago [-]
Won’t happen. Look at JavaScript.
profchemai 8 hours ago [-]
Could be a good idea, but without any evidence (benchmarks/comparisons) it's just a flashy name and graphic. Sounds like another "state" that gets contextualized via a gating mechanism w.r.t. previous vectors.
r00t- 4 hours ago [-]
Buzzword buzzword pretty graphics buzzword buzzword.

This is a nothing-burger.

nurettin 7 hours ago [-]
So if I start talking about crimes and criminals in an affectionate way, can I radicalize it?
anshumankmr 6 hours ago [-]
Can't post training help reduce potentially biased or harmful outputs?

Though even that isn't perfect. Some SOTA models sometimes seem to respond in ways that inadvertently soften the portrayal of controversial figures. For example, I remember prompting a model about a major terrorist who was mainly active decades ago, and only in my native country, and it responded with something like "some saw him as a hero, others as a villain," without taking a clear stance. But when asked about someone more world-famous, such as UBL, it went like "Naah, he is a bad guy".
