Remember all those 400 IQ videos about why Valve will never deal with the bots? They ran through every scenario to explain why it wouldn't happen. Yet it did.
I see a lot of people in the comments saying not to listen to your father, but I think your options are either to talk to him further or to listen. I say this because your father can most likely see something about the current state of things, and about you, that you may not like to admit.
You should definitely continue computer science if you think you can make it, but don't continue it just because you don't want to listen to your father.
The amount of times I've seen shit like
dsx = sx - ax
d1 = dsx ** ax
d2 = dsx ** sx
I swear to god. Especially in geometry-based functions.
This vaguely describes the downfall of any data representation.
I wouldn't say that the LLM understands, in the same way a universe doesn't understand. A universe contains things which understand, through instantiations of things which are "sentient". When the LLM makes up a fake story about a man, does the man in the story "understand"? It seems like the complexities of an internal experience are tied to the man in the story.
This might just be the fact that stories are about people who understand, and as a result the artifacts of understanding get captured by the LLM. Meaning it's able to deepfake understanding.
The deepfaking is what I go with until something convinces me otherwise. But the initial explanation I gave is generally how people think about the LLM understanding.
I would think that modeling LLMs this way is simply implementation. Ideally a categorization for LLMs is general enough to apply to all of them while also being useful for generating meaningful behavior.
For example,
You want composition to work in LLMs. If you first ask for an answer, then ask to modify the output, you should be able to compose this into a single prompt which yields the same result by definition.
For example, LLM("a question with a specific answer") composed with LLM("deterministically alter the input text") should output a string which answers the question but is also deterministically altered. This should also imply an equivalent single-shot LLM("a question with a specific answer? Deterministically alter the output text").
This composition should hold for any arbitrary external composition pattern. So, for example, if you have a complex prompting system which makes multiple calls to iteratively process an input, there should be a way to convert that external composition into a single internal composition.
Basically the big picture is that transformers can model arbitrary Turing machines, and since external composition is just a Turing machine, the LLM should be able to emulate it within the LLM itself.
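To make the claimed property concrete, here is a toy sketch. `llm`, `alter`, `external`, and `internal` are all hypothetical names I made up for illustration, and `llm` is a deterministic stand-in; a real model is sampled, so the equality between external and internal composition would only hold approximately.

```python
# Toy stand-in for an LLM call (an assumption for illustration only):
# deterministic, answers one known question, and honors an "UPPERCASE:"
# instruction prefix so a two-step request can be phrased as one prompt.
def llm(prompt: str) -> str:
    if prompt.startswith("UPPERCASE: "):
        return llm(prompt[len("UPPERCASE: "):]).upper()
    if prompt == "What is the capital of France?":
        return "Paris"
    return ""

def alter(text: str) -> str:
    # Stands in for llm("deterministically alter the input text").
    return text.upper()

# External composition: the caller chains two calls.
def external(question: str) -> str:
    return alter(llm(question))

# Internal composition: one prompt that asks for both steps at once.
def internal(question: str) -> str:
    return llm("UPPERCASE: " + question)

print(external("What is the capital of France?"))  # PARIS
print(internal("What is the capital of France?"))  # PARIS
```

The claim in the comment is that, for a sufficiently capable model, this equality should hold for any external composition pattern, not just this two-step one.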
Exactly, for some reason there's this rhetoric that the hackers are ex-NSA/CIA actors or something. In reality these people only hack in TF2 because it's the game most accessible to cheaters.
I also made a corresponding theme for Discord which utilizes the BetterDiscord plugin. https://github.com/AndarManik/One-Dark-Flat-Theme-for-Discord-BetterDiscord It uses the same hex color but in RGB:
--backgroundprimary: 40,44,52;
Thanks for catching that.
Here is a repo with the theme. https://github.com/AndarManik/One-Dark-Flat-Theme-for-Obsidian
282C34 is the hex code for the flat color; you can probably replace all occurrences of it to change the color.
It's done by the combo of the Windows PowerToys window manager + 20px of tile margin + flat UI themes + a background the same color as the windows. Hope that helps
This was to showcase the plugin's ability to output directly to the editor with formatting. In general I don't really use it for writing code, more so for speeding up writing my own documentation with formatting.
For the actual algorithm, I first precompute a weighted graph which includes each edge and vertex of the rectangles; I then compute additional edges between vertices where there is an unobstructed path between them. This represents any path through or around the collection of rectangles.
To compute a path, you take the two points and add them as vertices, and you add edges to any vertex which is unobstructed from those two points.
You then run A* with the heuristic being the distance to the destination.
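The steps above can be sketched as a visibility graph plus A*. All names here are mine, not the plugin's, and the visibility test is a simple sampling approximation rather than exact segment-rectangle intersection: rectangle corners become graph vertices, two vertices are connected when the straight segment between them doesn't pass through any rectangle's interior, and A* with a straight-line heuristic finds the path.

```python
import heapq
import math

def corners(rect):
    x, y, w, h = rect  # rect = (x, y, width, height)
    return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]

def inside(p, rect, eps=1e-9):
    # Strictly inside the rectangle; points on the border don't count,
    # so paths may slide along rectangle edges.
    x, y, w, h = rect
    return x + eps < p[0] < x + w - eps and y + eps < p[1] < y + h - eps

def unobstructed(a, b, rects, samples=64):
    # Approximate visibility check: sample points along the segment a-b.
    for i in range(1, samples):
        t = i / samples
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        if any(inside(p, r) for r in rects):
            return False
    return True

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def find_path(start, goal, rects):
    # Vertices: the two endpoints plus every rectangle corner.
    verts = [start, goal] + [c for r in rects for c in corners(r)]
    neighbors = {v: [] for v in verts}
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if unobstructed(u, v, rects):
                neighbors[u].append(v)
                neighbors[v].append(u)
    # A* with straight-line distance to the destination as the heuristic.
    frontier = [(dist(start, goal), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors[node]:
            if nxt not in seen:
                ng = g + dist(node, nxt)
                heapq.heappush(frontier, (ng + dist(nxt, goal), ng, nxt, path + [nxt]))
    return None  # no unobstructed route exists

rects = [(5, 0, 5, 10)]                    # one box blocking the straight line
path = find_path((0, 5), (20, 5), rects)   # routes around a corner of the box
```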
Obsidian plugins run on the Node runtime, so it was fairly easy to hook Obsidian up to ChatGPT. The challenging part was configuring the streaming so that it is well behaved with the cursor in the editor. For example, you can have multiple completions running at the same time, as well as edit the document while it is streaming, all of which were far more challenging than integrating the call to ChatGPT.
If you're interested in how I got the cursor to be well behaved: I first made a module which holds every instance of an actively running completion for any document. These completions run asynchronously and update each other's current cursor position, enabling multiple completions in a single document.
These asynchronous completions not only update the cursor positions of the other completions but also update the user's cursor position. This enables editing text after the completion without the cursor jumping back; it stays still relative to the text you're writing.
I then added an editor plugin which listens for modifications to the text; these are communicated to all the completions running on the document, which then update their current cursor positions.
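A minimal sketch of that bookkeeping, reduced to flat character offsets (the real plugin works against the Obsidian editor API; the class and method names here are mine): every tracked cursor at or after an edit shifts by the edit's length, whether the edit came from the user or from another streaming completion.

```python
class Completion:
    def __init__(self, cursor: int):
        self.cursor = cursor  # where this completion's next token lands

    def on_external_edit(self, pos: int, length: int):
        # An edit at or before our cursor shifts where we write next.
        if pos <= self.cursor:
            self.cursor += length

class Document:
    def __init__(self, text: str):
        self.text = text
        self.completions = []

    def user_insert(self, pos: int, s: str):
        # The user types; every running completion adjusts its cursor.
        self.text = self.text[:pos] + s + self.text[pos:]
        for c in self.completions:
            c.on_external_edit(pos, len(s))

    def stream_token(self, completion: Completion, token: str):
        # One completion streams a token; the others see it as an edit.
        pos = completion.cursor
        self.text = self.text[:pos] + token + self.text[pos:]
        completion.cursor += len(token)
        for c in self.completions:
            if c is not completion:
                c.on_external_edit(pos, len(token))

doc = Document("Hello  world")
comp = Completion(cursor=6)      # streaming into the gap between the spaces
doc.completions.append(comp)
doc.user_insert(0, ">> ")        # user types at the front mid-stream
doc.stream_token(comp, "brave")  # token still lands in the gap
print(doc.text)                  # ">> Hello brave world"
```

The same offset arithmetic also covers the user's own cursor, which is what keeps it still relative to the text being written.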
Yes, using the OpenAI API key through Node.js, the app makes the calls to completions.
This plugin is not on the community plugins list yet, but you can install it into your vault yourself from this GitHub.
The theme is based on the One Dark Pro Flat theme in VS Code. The two custom ChatGPT commands are inline completion and file naming.
Inline completion takes the current text as input and outputs the appropriate text where the cursor was originally placed.
File naming simply names files which are untitled.
I think it has less to do with the actual winning and losing, since the model never receives the signal of whether a game was won or lost. It's more like: suppose I have a function f(x) = 3; I can train my model on it. Now suppose you craft a piecewise function g(x) = (random(0,1) < 0.8) ? 3 : random(-10,10), where random() returns a random value within the interval. Because there is no correlation between the noisy mistakes, a model trained on either f(x) or g(x) would produce the same output under low-temperature sampling.
This is sort of why it can never bootstrap, since it can never learn past the pure signal, f(x).
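The f(x)/g(x) point can be checked numerically: fit the noisy expert's output distribution and decode greedily (the low-temperature limit), and the uncorrelated noise washes out, leaving the clean signal. A small sketch, with the mode of the empirical distribution standing in for greedy decoding:

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility

def g(x):
    # x is ignored, like the constant f(x) = 3.
    # 80% the true answer 3, 20% uniform noise over a small label set.
    return 3 if random.random() < 0.8 else random.choice(range(-10, 11))

counts = Counter(g(0) for _ in range(10_000))
greedy_output = counts.most_common(1)[0][0]  # mode = low-temperature decode
print(greedy_output)  # 3: the noise is spread thin, the signal concentrates
```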
The argument essentially is that chess is a domain where an Elo-1000 player in general makes Elo-1500 moves plus some noise from blunders. Transcendence is the result of denoising a noisy expert.
It's not conclusive whether this can bootstrap itself the same way self-play RL can. Based on what's said, it seems like there is a limit on the amount of Elo gained, because you can only denoise until it's just signal.
Could it be the presumption that the text "some words in quotes" is always that string of characters? It could be that at some point in time it's no longer that string of characters but some other string, and the presumption that that should never happen is motivated by experience.
Moreover, there can be pricing reasons for why this could be. Twitch.tv is currently not profitable because it costs a lot of money to host the live streaming. The same goes for these small channels, where the statistics say the viewers aren't going to be converted by the ads.
I'm having trouble parsing that variable initialization. Do the {} represent a dict, meaning this is like doing {0: 1, 1: 3, 2: 6} but in JavaScript?
You were freelancing, so you can definitely put it on your resume as experience. Just figure out how to talk about it in an interview.
If only the unsorted elements were in the correct spots, the sorted elements wouldn't have to be removed /s
I work in managed cloud and we aren't allowed to click links. That's pretty standard, and it was made pretty clear in our compliance training.
Why is it so hard to click on a hyperlink?
graphic_design