Why do people obsess over getting faster hardware rather than refining software to do crazy shit like this?
Do you spend much time on optimization? Do you know much about optimization? Can we talk about it? It interests me. Have you done anything like in the video, even if it was small? Some cool workaround you found on your own?
i was going to post it but i just found this one earlier today
i'm actually finding the stuff the demoscene produces to be really, really fucking awesome. makes me wonder what these guys do for a living. do they all work for intel or some shit?
.NET dev here... I feel like part of the "moar coars" dev team, but there isn't much I can do to be honest. It's not like I write search algorithms and shit; I use blocks that are part of the .NET stack.
I do optimize my SQL queries because Entity Framework a shit when it comes to more complicated queries with joins and shit.
But also, I'm thankful for the whole stack we have at our hands. Some of the software we make within weeks would take fucking years in man-hours to make with bare C/C++ etc.
But then again, we are just application developers. When I see the same bloat on OS level (lol Android) I wanna puke.
well, i'm guessing most library implementations of algorithms are already going to be fast compared to what most people would write, even before simple optimizations (like collapsing a bunch of classes you don't need into one or two).
>Why do people obsess over getting faster hardware rather than refining software to do crazy shit like this?
Fast hardware's cheaper and easier. When that stops being true, we'll learn how to optimize our code again.
No, because algorithms are easy to implement, and packing data structures so that the hottest bits are at the front is meaningless when the horde of retard designers comes in and spews all over the place.
History has shown that demosceners are really terrible at actual game development (see Psygnosis and what happened with farbrausch's 96k FPS engine). Still really cool to look at though.
How did they do the last one with the figurine spinning? It can't be a 3D model because it was barely churning out ~100 polys earlier.
And I doubt it's prerendered, because that would defeat the whole point of a demo.
>especially for video games and the like
The problem is almost always parallel programming and the overhead introduced by the API (which will be reduced immensely with the next iteration of DX, and with Vulkan).
Then what is the point in doing so?
I thought demos are supposed to be real-time.
Pic unrelated. It's some weird, custom board with a bunch of chips (FPGAs? not sure). Since it's low frequency, this could work. Probably. Not likely.
>Skiddie ROMcookers know better how to get new Android versions past official support properly working on my phone than the company itself
>No. This has never been the case EVER.
Listen here, retard, bringing support to a phone includes merging the latest commits (4.4.2 -> 5.0), and including the drivers. Problems arise when they need to make certain changes in the kernel, but this barely affects the operating system itself.
My G2 is perfectly fine. Are you on an AOSP based rom?
>when your phone hasn't had official support for over a year
The G2 already got the official lollipop update.
Have you really not realized yet that most companies purposefully don't optimize their software, so that they can update it later and claim a performance improvement, or force people to buy a new, faster product with better hardware? Small-market-share companies like Motorola (Moto X) and LG (G2) don't have the market saturation to throw moar cores in their phones, so they compensated by actually making a good software foundation for their devices, because they only have one shot. Samsung and HTC have a whole mid- and low-range lineup, so they don't need to spend extra time and money working on how the phone works; their priority is what the phone looks like, because they're "trusted brands".
Software programmer here. I've written lots of graphics-related stuff as a hobby: fractals, renderers, game engines, etc. When I first started, my programs were horrible and I wrote them using only basic, naive knowledge. For example, my first Mandelbrot renderer drew the set pixel by pixel using a terrible Windows draw API, and it was slow as fuck. It would draw it literally dot by dot and I would watch the dots appear across the screen in slow-mo. Eventually I moved on to OpenGL and DirectX rendering.

Another example of my naivety was in making the glow effect that games typically use for what's known as HDR (high dynamic range) or bloom. I knew from a mathematical standpoint that it's what's called a Gaussian blur, and that to achieve the effect with pixel-based images, you take a box of pixels around each pixel, multiply each one by a weight that falls off with its distance from the center pixel, and sum them up. When I first programmed a Gaussian blur, I used a single shader that would sum and compute the resulting blur using a 9x9 pixel box (called a convolution kernel). It worked even for real-time rendering, but I had problems scaling up to larger kernels (11x11, 13x13; only odd sizes work, because there needs to be a center pixel).

I later read actual white papers about how it's done in practice, instead of inventing it myself from pure mathematics. Basically, in practice you use two shaders instead of one: one does a horizontal pass, and then you take its results and do a vertical pass. The equivalent of the 9x9 box is a first pass with a 1x9 box and then a second pass with a 9x1 box. The total number of calculations per pixel is 2x9 rather than 9x9.

I have more stories to tell and can post some of the stuff I've done if you're interested.
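Not his shader code, but here's a minimal CPU sketch of the same trick in Python/NumPy (the sigma value, array shapes, and zero padding at the borders are my assumptions, not from the post). It compares the naive single-pass blur against the separable two-pass version: both produce the same image, but the two-pass version does 2x9 multiply-adds per pixel instead of 9x9.

```python
import numpy as np

def gaussian_kernel_1d(size, sigma):
    # Odd-sized 1-D Gaussian kernel, normalized so the weights sum to 1.
    assert size % 2 == 1, "kernel needs a center pixel"
    xs = np.arange(size) - size // 2
    k = np.exp(-(xs ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_2d(img, size=9, sigma=2.0):
    # Naive single-pass blur: one full size x size 2-D kernel,
    # i.e. size*size multiply-adds per output pixel.
    k1 = gaussian_kernel_1d(size, sigma)
    k2 = np.outer(k1, k1)                 # the full 2-D kernel
    r = size // 2
    padded = np.pad(img, r, mode="constant")  # zero padding at the borders
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + size, x:x + size] * k2)
    return out

def blur_separable(img, size=9, sigma=2.0):
    # Two-pass blur: a 1 x size horizontal pass, then a size x 1 vertical
    # pass -- only 2*size multiply-adds per pixel instead of size*size.
    k = gaussian_kernel_1d(size, sigma)
    r = size // 2
    h, w = img.shape
    # Horizontal pass (the "first shader").
    padded = np.pad(img, ((0, 0), (r, r)), mode="constant")
    tmp = np.zeros_like(img)
    for x in range(w):
        tmp[:, x] = padded[:, x:x + size] @ k
    # Vertical pass (the "second shader"), run on the intermediate result.
    padded = np.pad(tmp, ((r, r), (0, 0)), mode="constant")
    out = np.zeros_like(img)
    for y in range(h):
        out[y, :] = k @ padded[y:y + size, :]
    return out
```

This works because the 2-D Gaussian kernel is the outer product of two 1-D Gaussians, so the 2-D convolution factors into two 1-D ones. In a GPU implementation the intermediate `tmp` image would live in a render target between the two shader passes.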
For end-user apps, sure. For the majority of the computers in the world, no. I live in Ottawa, Canada, and 99% of programmers in the tech industry here work on telecom or military applications. Their target platforms are 99% ARM with VxWorks as the OS. Optimization is critical here: every code submission I make has to have a memory delta attached, and every new task created has to use the minimum amount of stack possible. Yes, if you are writing end-user programs for Windows/OSX/Android, then optimization is dead. For the majority of programmers in the world, no. End-user applications are a very small sector of the tech industry. When I was in university applying for co-op positions, we had a job database where companies would post their openings for our university. There were a million programming jobs. I never saw one that was for end-user applications.