With silicon chips approaching their physical limits, what's the next logical step? Optical computing, or stacking dies for muh100core meme shit?
Graphene is the worst substance in the world for raising BS hopes.
It performs miracles in the lab all day long... then you come back in the morning to find it has turned back into graphite.
Stacking dies and maybe some other silicon-like materials with lower limits if they have similar processes.
I don't think there's going to be any radical change for quite a while, probably over a decade. All of those nano-materials still need a load of research, and the capital cost of building new chip plants is obscene, so Intel isn't going to be enthusiastic about switching over until it has exhausted all conventional possibilities.
>Atom-sized transistors.
They won't be transistors at that point. Logic gates can exist at the molecular scale. The issues are stability and production. We need nano machines to make nano machines.
We need machines to make other machines. We always have.
I think plants could be used. Selenium and bioluminescence.
IIRC quantum computing architectures are only well suited to specific operations, and as such they're most viable as coprocessors, especially for cryptography.
However, I think the future of computing could very well be trending towards a load-balanced cloud where tasks are assigned to specific systems designed to compute them most efficiently.
>However, I think the future of computing could very well be trending towards a load-balanced cloud where tasks are assigned to specific systems designed to compute them most efficiently.
I will never embrace this future if that "cloud" is out of my own direct physical control.
What you just said is an idea I've had and wanted to see for quite a long time. The cloud part I'm not at all on board with, unless it means a collection of parts in a local machine that are addressed as resources.
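That "local resources addressed by task type" idea can be sketched in a few lines. This is a toy illustration, not anyone's real architecture; the backend names and worker counts are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "local cloud": each resource type gets its own executor,
# and tasks are routed to whichever backend computes them best.
BACKENDS = {
    "crypto": ThreadPoolExecutor(max_workers=2),    # stand-in for a coprocessor
    "graphics": ThreadPoolExecutor(max_workers=4),  # stand-in for a GPU
    "general": ThreadPoolExecutor(max_workers=1),   # plain CPU work
}

def dispatch(kind, fn, *args):
    """Route a task to the backend suited to it, defaulting to 'general'."""
    pool = BACKENDS.get(kind, BACKENDS["general"])
    return pool.submit(fn, *args)

if __name__ == "__main__":
    f = dispatch("general", sum, [1, 2, 3])
    print(f.result())  # 6
```

The point is just the routing table: the caller names the kind of work, not the hardware, which is exactly what keeps it workable whether the backends live in a datacenter or in your own box.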
Stack more chips on top of each other (3-D chip stacking, e.g. stacked-die multi-chip modules).
Then there is germanium:
Architecture improvements, better parallel multiprocessing, improvements in software design that can actually take advantage of all of that parallel processing, and iterating on energy efficiency.
That's all I can think of off the top of my head.
You are missing the point, I think. You cannot fail to have noticed that back in the 1960s, 70s, and 80s, serious computing was done on mainframes that distributed input/output to terminal nodes on a timesharing basis. We are moving back toward something similar because consumer computing is being broken down into simpler tasks: mobile devices for personal contact and amusement, gaming on consoles. There is very little need for most people to have the power of a desktop computer. Eventually the home will contain thin clients that connect to central servers. The era of large desktop computers is rapidly coming to a close.
Not sure if applicable to stacking, but there was talk of microchannels being built into the chip itself to allow for better cooling. Perhaps future chips will have a portion of the heatsink built in, with channels penetrating deep enough to carry away the heat.
The physical size of the CPU is only one of the bottlenecks of the total system. Unified memory and HBM will probably give better performance gains than the last couple of generations of CPU die shrinks combined. We have an extremely long way to go in terms of software architecture to truly make use of multi-core systems, and most games are still severely restricted in that respect.
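Amdahl's law makes the software point concrete: the serial fraction of a program caps its speedup no matter how many cores you add. A quick sketch (the 60% parallel figure is just an illustrative assumption, not a measurement of any real game):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n).

    The serial part (1 - p) never shrinks, so it dominates as n grows.
    """
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even with 16 cores, a program that is only 60% parallelisable
# gets well under a 3x speedup.
print(round(amdahl_speedup(0.60, 16), 2))  # 2.29
```

Which is why shipping more cores does little until the software side catches up.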
The most interesting technology I've seen as a silicon replacement is optical computing. Optalysis were supposed to be demonstrating a prototype in January, but as far as I know they instead entered into an agreement to provide optical computers for DNA sequencing as a practical application, rather than showing an early prototype.
It already has, anon. Amazon's cloud services are absolutely dominating the business world.
They said at their conference that they expect most consumer devices to simply be thin clients that stream and request resources dynamically from datacenters within the next decade.
Yes, but how would multi-layered chips be less power-hungry, faster, or cheaper than conventional chips?
Saving space is hardly a reason, high power chips (CPU/GPU/etc) make up a tiny portion of our devices.
Thanks. That is an interesting and viable idea:
>IBM scientists unveiled a powerful and efficient technique to cool 3-D chip stacks with water. In collaboration with the Fraunhofer Institute in Berlin, they demonstrated a prototype that integrates the cooling system into a 3-D chip by piping water directly between each layer in the stack.
Looks a lot like the old Connection Machine supercomputer's design, but shrunk down.
>muh100core meme shit
In all seriousness: we don't know. Most likely more cores. Optical and graphene are just experimental toys at this point. Quantum computing was a big flop.
Eventually we will get to something like 5nm lithography and all hell will break loose. Computers will have to become bigger at that point (bigger than a tablet) to even play 4K-resolution games. I can imagine the disgust people will feel when they're forced to use a laptop instead of their pocket smartphone to play Crysis 11 on max settings in 2030.
Intel unveils 72-core x86 Knights Landing CPU for exascale supercomputing
MIT's 100-core CPU Will Be Ready This Year
IBM demos 100-GHz graphene transistor
>Looking into indium gallium arsenide as a replacement.
>Chip researchers are trying everything: graphene, gallium nitride, molybdenite, even carbon nanotubes.
>Planned indium gallium arsenide transistor is 22nm in length, which may not sound impressive as Intel already uses a 14nm process to manufacture chips, with 10nm on the way, but silicon is fast approaching a brick wall.
>The smaller transistors get the harder it is for them to handle current efficiently.
>Indium gallium arsenide’s ability to conduct electrons is about five times better.
>The material is already widely used in fiber-optic applications and in radar systems.
>Intel is also looking into new types of packaging: 2.5D, where separate dies are placed side by side on an interposer, and 3D, where dies are stacked directly on top of one another. Both 2.5D and 3D packaging are good for reducing power consumption, with 3D really coming into its own with mobile and wearable devices.
It's from 2010. The performance might have improved since then.
This would be like using Bochs because your computer is too slow to run something.
How is massive parallelism meme shit? When you can't make it do one thing faster anymore, what's left but to make it able to do more things at once? This isn't just running 50 programs at once. A lot of programs can multithread well. Also this is literally what GPUs and supercomputers do.
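The "many things at once" point is easy to demonstrate: one big job split into independent chunks that each can run on its own core. A minimal sketch using Python's standard library (chunk sizes are arbitrary):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum one independent chunk; chunks share no state, so they parallelise."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Four independent chunks of one big sum; each may run on its own core.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(1_000_000)))  # True
```

This is the same divide-and-combine shape that GPUs and supercomputers exploit, just at a vastly larger width.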
Yes and 640 KB is enough for everyone.
You don't need anything above a Pentium to comfortably do any imaginable task in Windows 3.1.
Current computers handle current software well, but we'll need better ones to run whatever the future brings (which is most likely going to be mostly more load from more layers of lazily programmed frameworks but meh).
Except you lot have it completely wrong.
Quantum computers are specially built for specific optimization problems. They have to be kept at around 0.2 kelvin in a vacuum, and they consume an enormous amount of power.
They are not built like conventional computers at all, and cannot perform the same tasks.
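For context, the kind of problem an annealer-style quantum machine targets is a QUBO (quadratic unconstrained binary optimization): minimize a quadratic function over binary variables. A classical brute-force sketch of a tiny, made-up instance shows the shape of the problem (the coefficients here are arbitrary):

```python
from itertools import product

# Tiny QUBO: minimize sum of Q[i,j] * x[i] * x[j] over binary x.
# An annealer is built to sample low-energy states of exactly this kind
# of objective; here we just brute-force all 2^n assignments classically.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
    (0, 1): 2.0, (1, 2): 2.0,                  # couplings penalise 1-1 pairs
}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2.0
```

Note what's missing: branching, loops over arbitrary programs, general-purpose I/O. The hardware maps variables to qubits and lets physics find low-energy states, which is why it can't just run ordinary software.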
This anon is right.