myFavOptimizationMethod
Meme: why care for garbage collection when you can have garbage explosion?
I must suggest this to my boss
We throw them fireworks called frameworks to do so
Somebody still probably should collect the exploded garbage afterwards, it can be dangerous
Truly a dumpster fire
I have a similar story, no missiles though:
SysAdmin coming over to another team behind us:
- "You remember when you told me to configure your pods to reboot on out-of-memory error?"
- "Yep sure."
- "And do you remember you said you will look into it if it gets too frequent?"
- "Yeah."
- "Well, get to it then. It's now rebooting 110 times per hour." :)
your garbage collection is when your missile explodes, but what about the garbage collection for cleaning up the parts when they hit the ground ?
That, sir, is the enemy’s problem
The scariest thing here is that a missile was running Windows.
"WE GOT INCOMING!!! FIRE TO INTERCEPT!!"
"I can't, sir. Windows is installing important updates!"
The missile lands, doesn't explode due to Windows Update, then explodes 5 hours later.
My first thought was that you could get twice as many missiles if you did things properly, assuming you had as much of everything else as needed, but I really don't know how much memory a missile needs. But considering the Apollo 11 computer needed about 4 KB of memory to put a man on the moon, I'm sure memory is cheap enough that it doesn't matter much.
I'd imagine that when producing missiles that are in the hundreds of thousands of dollars, adding an additional "16GB module" wouldn't make any difference to the end price lol
I agree, but it's not a "16 GB module". It has to be wildly redundant and have dozens of certifications; an additional memory module could very well add non-negligible cost, despite a consumer 16 GB module being pennies compared to a 100k+ missile.
One advantage of their strategy is that they never need to worry about memory fragmentation, stalls during memory allocation, etc. since it’s literally just incrementing a counter for allocations. The code I deal with at work isn’t realtime, but there are sections so sensitive to latency that we forbid dynamic memory allocation.
Are you working in stocks by any chance?
Nope, I work on a database.
That's assuming those programmers COULD do it in 4 KB of memory. Also, I doubt they did it with barebones code.
Can't argue that that's the most efficient garbage disposal.
Consider yourself collected 😎
Garbage redistribution, surely…
I mean... Is doubling the memory cheaper than paying the developer time to debug it? Even if it is, that still makes me sad.
Yeah, at least in the gaming industry 🥲
What do you mean your 4090 card can’t run my tic-tac-toe game ?
It’s not the game, you just have the wrong graphics settings in the nvidia control panel
"We tested this at NASA, and it works fine. Upgrade your rig"
Western progressive gaming companies mentioned
I find it to be especially annoying outside of gaming, actually. I get why games with modern graphics need at least an $800 or so PC. What I don't get is why Reddit needs 6 seconds to load a page, or why the Twitter website just freezes up and crashes after being open on my phone in the background for a few hours, or why the Twitch app randomly slows down to 2 FPS before freezing, etc. etc.
Because Javascript is trash, and web devs avoid optimization like the plague
I think there's also a certain contentment with the current technology; e.g., I've found HTMX instead of fancy JS frameworks can actually make things much more responsive. JS being a shit language is definitely a part, though. Also, optimization often doesn't help much if your fundamental architecture is flawed.
I mean, this was certainly true in the late 80s. It's what happened to Lotus Symphony and Lotus 1-2-3 3.0. They tried to fit into the existing 640K memory limitation, either by cutting features or serious optimization, but by the time they came out everyone had at least 2 MB of RAM.
Whether it's applicable today I'm not so sure, but I get the sense it isn't entirely false.
This happens with webdev, where a more powerful cloud instance is much cheaper than the hours an engineer costing US$300 per hour would spend looking into optimization. Hours that could be focused on new features.
It is also something every gamer keeps repeating on Reddit every time a game gets released at 30 FPS on console, or the PC port is not fully optimized to gamers' standards.
There is nothing more permanent than a temporary solution
I mean yeah, it's a joke, and I understand the spirit of it, but I also work at a company where the clients pay millions of dollars for the product license and need to handle the data for entire countries at a time. And then we get bug reports that the application crashes because it ran out of memory on an 8 GB machine.
I mean seriously, I'm willing to pay for the damn RAM stick if that would make them leave me alone. Sure, we could try to rewrite the entire application architecture for 1000 to 2000 man-days... but this could be solved under all realistic scenarios by just adding 16 GB of RAM. Yeah... but no. And I still don't understand why the clients pay multiple millions for a product and then stick it on an 8 GB machine (and this is on-premise stuff, no cloud, and even then, I've seen the prices for both Azure and AWS, and it still wouldn't justify skimping on that).
Unironically, it often is.
In 2012 I was hired by a company as a DBA to help battle a bunch of developers who claimed our SQL servers were underpowered.
When I arrived we had a server with 96 CPUs, 256 GB of memory, SSD caching, and fibre-attached HDDs in massive high-availability RAIDs. We were pushing I/O numbers I'd never seen before.
One of the biggest issues was non-ansi compliant SQL. I went through so many rewrites of SQL code and none of it was hardware related.
Indeed. “We can spend a few dollars a month extra for that extra CPU, or you can pay me $12,000 over the next couple months to optimize it. Or I can use that time to build a new feature that gets us new customers.”
Until you scale too hard and your cloud bill makes you go bankrupt.
Just download more RAM /s
But my Internet is slow. I could only download slow RAM...
Well, I spent a year optimizing code and saying “there is a limit to what we can gain and it is getting harder and harder to gain anything. Buy more hardware” to which the answer was “It is impossible to update the hardware“
It was an 8-CPU server with Hess, running a huge monolith application PLUS the database, for huge computing and DB loads.
Long story short, they spent big bucks upgrading hardware two weeks before go-live. Plus a year of three devs' time.
Now we finally convinced them to split it into 2 servers.
At some point, you got to trust your experts on this.
"It's impossible to update the hardware" is probably the biggest red flag possible outside of the realm of embedded systems. If you can't move your software when stuff is working, how do you plan to handle anything breaking?
This is, in some regard, simply the direction software design has taken. We favor compatibility over efficiency: hence JavaScript (and its frameworks) running in a browser, interpreted languages like Python, platforms like Electron, and the existence of VMs and containerization (like Docker).
It's not about being fast, it's about being able to reuse and rely.
I mean, we really, really care about efficiency. But only when inefficiency becomes a noticeable problem.
This is a fair point, but importantly we don't strive for high efficiency, the bar is simply acceptable inefficiency.
My university advisor said Computer Scientists prove it works, Software Engineers make it work well.
faster hardware is the cure for slow code. I have yet to find a cure for bad code ):
Better design?
How dare
don't you dare tell me truths that I don't want to hear
Some problems can’t be solved with more hardware. If the CPU on your SQL Server instance is on fire because a frequently run query suddenly got an extremely bad execution plan (i.e. parameter sniffing), then doubling the CPU will accomplish nothing other than doubling the number of CPU cores that are on fire. Throw all the hardware you want at it; it won’t solve the problem. Forcing the good execution plan (which takes 5 seconds to do and costs no money) will fix it instantly.
There's no PR that's less than an hour of time. In fact, make that two.
There's no reasonable developer that COSTS (not the same as is paid) less than ~150€/hr.
So any small change is going to cost 150-300€. That's just how it is. Still worth it, but never say "free", always say "low cost". Low cost is kinda believable, free is not.
Who said anything about a pull request?
A PR is the standard unit of software change, hence why I used it.
Why would you need one to force an execution plan in SQL Server? That doesn’t involve any code, you just go in and click a button.
I assumed that things that run a lot of the time get run by code. And therefore you'd, you know, fix the SQL statement in your code, commit that, make a PR, corral some people to write LET US GET THIS MERGED, etc.
Maybe you should stop assuming then.
Hardware devs: ok let's use this photon printer to optimize the quantum fluctuations, combined with the alpha mammography that should yield a 10% increase in processor speed.
Software devs: did you know there's a library that checks whether a variable equals zero?
It seems so
You think I'm gonna touch that shit again?
I have been told many times to "just buy more cpu"
game dev in general
I prefer using hardware faster, but to each his own.
Well no. Just reduce the sleep() time incrementally for each new version.
For reasons, I run and regularly use each of Windows, macOS, and Linux on different machines, all relatively recent. The response time in the terminal, even if I'm using the same terminal emulator (kitty) with the same shell (zsh) and config (github.com/Schlueter/zsh-config, based on zprezto), is noticeably better on Linux. Even more so if I use the suckless terminal.
When browsing the web, the difference is much less noticeable.
Hardware doesn't compensate for shitty code, but the way the Internet is built makes it all irrelevant.
Well... It's been called "The American way" for decades 😁
It is according to Todd Howard.
As computers get faster the code we run gets worse and worse. Case in point: JavaScript.
Gaming in a nutshell.
'Whats optimization? Oh you mean you need better hardware.'
"I'm running the latest overpriced Nvidia TI..." -Some poor fool.
'Skill issue' - Todd Howard
CoD BO6 is going to take half the PS5's storage too
I guess this is how Windows (and maybe AAA games) work...
"Yes" says AAA game devs
Doing my object-oriented programming exam, it became painfully clear how antagonistic optimization and flexibility are with each other.
That really depends on what your problem is; most of the time when I see code where performance is a problem, the code is not flexible either...
Obviously
The Microsoft way
just increase the timeout...
Absolutely great for people still using 4th gen i7
Not if the slow bad code has timing issues
Mem leak? Install more memory duh!
Answer correctly and you may have a future at Microsoft
aws invoice
You are ready to be a game developer
Take that, Casey Muratori.
Todd Howard says yes.
May's (or Wirth's) Law https://en.wikipedia.org/wiki/Wirth%27s_law
Reverse Moore's Law...
Every 12-18 months the number of transistors on a chip doubles.
Also every 12-18 months the efficiency of software halves. (:
I really want to see what the performance is on this XD
Activision be like
Yeah.
That's how windows and every modern IDE keeps going ¯\_(ツ)_/¯
I can relate. I'm not an expert in SQL tuning, but after 10 years in the industry, I know my way around. Many things were running slow when I started working for this big delivery app. The data stack was basically Snowflake.
Since I was a new hire, I wanted to show some initiative and did some testing to speed things up. After all that work, my manager said, "Good job, but if it's slow, just use a larger cluster." 😅
Yep, haha, I know it's a joke, but in all seriousness, it's wasteful as fuck. Now companies are greenwashing code with dark-theme BS, meanwhile Microsoft end-of-lifes laptops because Windows 11 yet again needs better hardware.
That's the mindset of all companies that made everything an Electron app.
OpenAI before wasting the entire planet's water supply
This is a huge pet peeve of mine.
Since cloud computing (AWS, GCP, Azure, etc.) the solution to so many issues is to throw more resources at things.
Too slow? More CPUs and GPUs. Out of memory? Throw more RAM at it.
Yes it's valid, but there are times in which you need to buckle down and fix your code ffs. We throw money at problems and then complain that we're spending too much money.
No
What you need is code optimization and X amount of hours to do it, only to conclude that it needs a full rewrite, which will take another X amount of hours.
Throwing more hardware at it is the last option, for when there's nothing else you can tell the managers to justify requesting another X amount of hours.
My DevOps brain is angry with this.
No, because if your software is made to be slow as shit it will be slow as shit on any hardware
not if i run it on a 25.7 GHz processor with 252 GB of RAM
They told me, "Stop using Python, it's very slow. C is so much faster." But it took me at least 2 hours just to figure out what comes after int main(). So the moral of the story is this: C is only faster if you know how to code in it.
Always has been
Ah yes, java optimization
Java is faster than most languages though?
Lol
The fuck is this lazy meme format?
spotted the old head
https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98125