
Perplexity visualizer


I read this post by Simon Willison about simple single-page apps he built using Claude Artifacts, and I also wanted to try running a (small) LLM in the browser. The result is Perplexity Visualizer.

Be warned: opening the link will download the model (over 100 MB).

It uses the smallest (quantized) version of GPT-2 downloaded from Hugging Face and the transformers.js library to run the model entirely on-device.
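For reference, the loading and forward pass with transformers.js can be sketched roughly like this (a minimal sketch assuming the Xenova/gpt2 checkpoint and the @xenova/transformers package, not necessarily the exact code the app uses):

```javascript
import { AutoTokenizer, AutoModelForCausalLM } from '@xenova/transformers';

// Download (and cache) the quantized GPT-2 weights, then tokenize the
// input and run a single forward pass to get next-token logits with
// shape [batch, sequence_length, vocab_size].
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt2');
const model = await AutoModelForCausalLM.from_pretrained('Xenova/gpt2');

const inputs = await tokenizer('The quick brown fox');
const { logits } = await model(inputs);
console.log(logits.dims); // e.g. [1, 4, 50257] for a 4-token input
```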

Screenshot of perplexity visualizer

The idea is pretty simple: you type some text and get a perplexity measure plus probabilities for each token. It turns out Claude couldn’t few-shot this as an artifact, but it was very helpful nonetheless.

Perplexity is a measure of how well the model predicts the next token, on average. It can be calculated as follows:

$$\mathcal{P} = \exp \left( - \frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid x_{<i}) \right)$$

It’s the exponentiated average negative log-likelihood for all tokens in the sequence. Each probability p is obtained from the model outputs given all previous tokens in the sequence.
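As a small sketch (not the app’s actual code), given an array of per-token log-probabilities the perplexity is just the exponentiated average negative log-likelihood:

```javascript
// logProbs holds one log p(x_i | x_<i) per predicted token.
function perplexity(logProbs) {
  const avgNegLogLikelihood =
    -logProbs.reduce((sum, lp) => sum + lp, 0) / logProbs.length;
  return Math.exp(avgNegLogLikelihood);
}

// Example: if the model assigns probability 0.25 to every token,
// the perplexity is 4, the same uncertainty as a uniform 4-way choice.
perplexity([0.25, 0.25, 0.25].map(Math.log)); // 4
```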

Here, the maximum context is 16 tokens to keep performance acceptable. It’s interesting to try different sequences and see how certain tokens are very unexpected, or how one token makes the next few very predictable.

Here is the output using the text of this post:

Screenshot of perplexity visualizer


