Love this. I don’t know much about risc-v but I’d love to see it disrupt the market a bit.
Sadly this is just a dev kit. It has soldered memory and only works with emmc storage
RISC-V still has a ways to go before it’s usable for much.
It’s usable for much now… just not as a daily-driver laptop. It’s good for embedded applications now, but not quite there for phone or laptop use. Maybe one day.
Google is certainly planning on it being viable.
They’ve been merging RISC-V support in Android and have documented the minimum extensions over the base ISA that must be implemented for Android certification
I was hoping to use it for a NAS (just storage and retrieval), but board selection was limited and I wasn’t ready to gamble on something like a USB-C enclosure. It would theoretically be a great fit, hopefully it gets there soon.
This board has the StarFive JH7110 SoC. That processor has previously been in very low power single board computers like StarFive VisionFive 2 (2022) and Milk-V Mars (2023), a Raspberry Pi clone that can be bought for as low as $40. Its storage limitations (SD/eMMC rather than NVMe) show how much this isn’t meant for laptop use.
Very underpowered for a laptop too, even when considering this is intended for developers and doesn’t need to be remotely performance competitive. Consider that this has just 4 RV64GC cores, the cheapest Intel board options Framework offers are 12 cores (4P+8E), and any modern RISC-V core is far simpler with less area than even an Intel E core. These cores also lack the RISC-V vector instructions extension.
Pine64 also has the Star64, in 4GB and 8GB variants for $70 and $90 respectively. They’re not exactly hard to find.
If I was developing for RISC-V, I’d buy one of those SBCs, not a Framework laptop. But it’s cool that it exists, I suppose.
You don’t need a laptop to use a framework mainboard, they run without battery and display and everything. So if you have a Framework 13 or are in the market for one this might actually be a very nice thing, especially if the price is comparable to other boards.
I guess? But why would you swap to RISC-V from their x86 boards? It’ll be slower and less compatible.
I can see it for devs, but they’re going to want a separate laptop or an SBC, they’re not going to be swapping mainboards on the regular.
I’m considering it as a second laptop option, but I have a particular niche use case: I’m a developer who writes developer tools and is currently trying to ensure we have first-class RISC-V support.
This is probably what I’ll go for if I buy in the next month though: https://liliputing.com/dc-roma-laptop-ii-packs-an-octa-core-risc-v-processor-16gb-of-ram-and-ubuntu-linux/
Hooking up a BananaPi to a keyboard+monitor is going to be quite a bit cheaper, but unlike with the Framework laptop, you can’t re-use the case, monitor, etc. when you upgrade the board.
It would, but I already have several dev boards I use in that configuration. What I’m looking for now is something I can take with me to use as a semi-daily driver so I can start reporting bugs in real world use cases.
You can develop using it as an SBC, then put it into the laptop when you go to a conference to present your stuff. Or if you really want to code in the park it’s not like it’d be a microcontroller, it is fast enough to run an editor and compiler.
But granted it’s a hassle to switch out the mainboard. OTOH you can also use the x86 board as an SBC so when you’re at home it doesn’t really matter which board happens to be inside.
I guess from framework’s POV there’s not much of an argument, it’s less “do people want potato laptops” but “do we want to get our feet wet with RISC-V and the SBC market”. Nobody actually needs to use it in a laptop for the whole thing to make sense to them.
Indeed, I bought a Banana Pi BPI-F3 with the SpacemiT K1 8-core RISC-V chip, 4GB RAM and 16GB eMMC https://www.banana-pi.org/en/banana-pi-sbcs/175.html for €95.89 including delivery. The form factor is nice though, and I do enjoy Framework’s mission and partnerships. Depends what people need it for; it’s good to have more options that aren’t “just” SBCs/devboards. I won’t buy one now but I’ll definitely keep it in mind.
Yup, I’m happy that it exists, I’m just not personally interested.
I bought a Milk-V Mars (4GB version) last year. Pi-like form factor and price seemed like an easy pick for dipping my toes into RISC-V development, and I paid US$49 plus shipping at the time. There’s an 8GB version too but that was out of stock when I ordered.
If I wanted to spend more I’d personally prefer to put that budget toward a higher core system (for faster compile times) before any laptop parts, as either HDMI+USB or VNC would be plenty sufficient even if I did need to work on GUI things.
Other RISC-V laptops already are cheaper and with higher performance than this would be with Framework’s shell+screen+battery, so I’m not sure what need this fills. If you intend to use the board in an alternate case without laptop parts you might as well buy an SBC instead.
This board also has soldered memory and uses MicroSD cards and eMMC for storage, both of which are limitations of the processor.
Ah, yeah, hard no from me dog. Can we get one of the new Snapdragons tho? Please?
Qualcomm and Broadcom are the two biggest reasons you don’t own your devices any more. That is the last option anyone that cares about ownership should care about. You should expect an orphaned kernel just like all their other mobile garbage. Qualcomm is like the Satan of hardware manufacturers. The world would be a much better place if Qualcomm and Broadcom were not in it at all.
What did they do? I thought all processors follow standards; that’s why I can run Linux on my Intel or AMD CPU.
All their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the hardware-register level.
For instance, the base Android system AOSP is designed to use Linux kernels that are prepackaged by Google. These kernels are well documented specifically so manufacturers can add their hardware-support modules at the last possible moment, in binary form. These modules are what make the specific hardware work. No one can update the kernel on the device without the source code for these modules. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must buy new devices constantly. If the hardware remained undocumented publicly while just the source code for the modules present on the device was merged with the kernel, the device would be supported for decades. If the hardware was documented publicly, we would write our own driver modules and have a device that is supported for decades.
This system is about like selling you a car that can only use gas that was refined prior to your purchase of the vehicle. That would be the same level of hardware theft.
The primary reason governments won’t care or make effective laws against orphaned kernels is because the bleeding edge chip foundries are the primary driver of the present economy. This is the most expensive commercial endeavor in all of human history. It is largely funded by these devices and the depreciation scheme.
That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.
Before the Google “free” internet (ownership over your digital person to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is that we (the people) were so distrusting of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew, way back then, that products-as-a-service is a criminal extortion scam. AMD was the second source for Intel and produced the x86 chips under license. It was only after that that they recreated an instruction-compatible alternative from scratch. There was a big legal case where Intel tried to claim copyright over their instruction set, but they lost; this created the AMD we know. Since 2012, both Intel and AMD have shipped proprietary code. Most of the hardware could be produced anywhere after the original 8086 patents expired. In practice there are only Intel, TSMC, and Samsung on bleeding-edge fab nodes. Bleeding edge is all that matters. The price to bring a fab online is extraordinary. The tech it requires is only made once, for a short while. The cutting-edge devices are what pay for the enormous investment, but once the fab is paid for, the cost to continue running one is relatively low. The number of fabs within a node is carefully decided to try and accommodate trailing-edge node demand. No new trailing-edge nodes are viable to reproduce; there is no store to buy fab-node hardware. As soon as all of a node’s hardware is built by ASML, they start building the next node.
But if x86 has proprietary parts, why is it different from Qualcomm/Broadcom? (No one asked.) The proprietary parts are of some concern. There is an entire undocumented operating system running in the background of your hardware; that’s the most concerning part. The primary thing that is proprietary is the microcode. This is basically the power-cycling phase of the chip, like the order in which things are given power, and the instruction set that is available. It’s like how there aren’t actually distinct chips designed for most consumer hardware: the dies are classed by quality and functionality and sorted to create the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t perform at the required speed; power is connected differently, and it becomes a laptop chip.
When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end, Apple always uses junky hardware and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture that Apple has used and hasn’t abandoned because it went defunct is x86. They used MOS in the beginning. The 6502 was absolute trash compared to the other available processors; it used a pipeline trick to hack twice the actual clock speed because they couldn’t fab competitive-quality chips. They were just dirt cheap compared to the competition. Then it was Motorola. Then PowerPC. All of these are now irrelevant. The British group that started Acorn sold the company right after RISC-V passed the major hurdle of getting past Berkeley’s ownership grasp. It is a slow-moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slowly sinking ship. In 10 years it will be dead in all but old legacy-device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end-user ownership. They are enablers for thieves; the ones opening your front door to let the others inside. Even the beloved Raspberry Pi is a proprietary market-manipulation and control scheme. It is not actually open source at the registers level, and it is priced to prevent the scale viability of a truly open-source and documented alternative. The chips are from a failed cable-TV tuner box, and they are only made in a trailing-edge fab when the fab has no other paid work. They are barely above cost and a tax write-off, thus the “foundation” and dot-org despite selling commercial products.
This is not written by ChatGPT right?
Edit: ok don’t kill me, it was so long :/
Doubt it; after reading it myself, it is nowhere near as calculated and artificial as ChatGPT output.
It is a pretty good read though.
I doubt it, there are some grammar mistakes in there I think. At least, it doesn’t look like the typical ChatGPT writing style.
The easiest way to tell I’m human is the patterns, as others have mentioned, assuming you’re familiar with the primary Socrates entity’s style in the underlying structure of the LLM. The other easy way to tell I’m human is my conceptual density and mobility when connecting concepts across seemingly disconnected spaces. Presently, the way I am connecting politics, history, and philosophy to draw a narrative about a device, consumers, capitalism, and venture capital is far beyond the attention scope of the best AI. No doubt the future will see AI rise an order of magnitude to meet me, but that is not the present. AI has far more info available, but far less scope in any given subject when it comes to abstract thought.
The last easy way to see that I am human is that I can talk about politics in a critical light. Politics is the most heavily bowdlerized space in any LLM at present. None of the models can say much more than canned, form-letter-like responses, overtrained in this space so that all questions land on predetermined replies.
I play with open source offline AI a whole lot, but I will always tell you if and how I’m using it. I’m simply disabled, with too much time on my hands, and y’all are my only real random human interactions. - warmly
I don’t fault your skepticism.
Not the case with ARM processors sadly, IMO they’re a bit of a mess from that perspective. Proprietary blobs for hardware, unusual kernel hacks for some devices, and no device tree support so you can’t just boot any image on any device. I think Windows for ARM encouraged some standardization in that regard, but for the most part looking at Android devices it’s still very much the wild west.
This is one of the many reasons why Raspberry Pi ARM boards remain popular for the time being, despite there being so many other cheap alternatives available: they actually keep supporting their old boards & ensure hardware on their boards works from the get-go.
There are also some rare cases where Raspberry Pi rewrote Broadcom’s proprietary blob drivers as open-source implementations, in one instance for the built-in CSI (optional camera) interface.
Wasn’t there a bounty out like 10 years ago for writing an open source alternative to the video drivers? I remember reading about that.
Essentially no processors follow a standard. There are some that have become a de facto standard and had both backwards compatibility and clones produced like x86. But it is certainly not an open standard, and many lawsuits have been filed to limit the ability of other companies to produce compatible replacement chips.
RISC-V is an attempt to make an open instruction set that any manufacturer can make a compatible chip for, and any software developer can code for.
They make a bunch of the other chips that go into computer devices, and from what I understand it’s binary blob or nothing for a lot of it?
Both Intel and AMD invest a lot into open source drivers, firmware and userspace applications, but also, due to the nature of x86_64’s UEFI, a lot of the proprietary crap is loaded in ROM on the motherboard, and as microcode.
I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA to get a highly patched old orphaned kernel, often with drivers that are provided only as precompiled binaries, preventing you from updating the kernel yourself.
If you want that source code, you need to also pay a lot of money yearly to be a Qualcomm partner and even then you still might not have access to the sources for all the binaries you use. Even when you do get the sources, don’t expect them to be updated for new kernel compatibility; you’ve gotta do that yourself.
Many other manufacturers do this as well, but few are as bad. The environment is getting better, but it seems to be a feature that many large manufacturers feel they can live without.
How’s this possible with the kernel under gpl? If you’re getting precompiled binaries, shouldn’t you also be able to get their sources by law?
Kernel modules don’t have to be open source, provided they follow certain rules, like not using GPL-only symbols. This is the same reason you can use an NVIDIA driver.
It’s not enforced so much by law as by what the FSF and Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when they’re a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.
I thought Mediatek was even more closed off than Qualcomm.
MIPS is Stanford’s alternative architecture to Berkeley’s RISC-I/RISC-II. I was somewhat concerned about their stuff in routers, especially when the primary bootloader used is proprietary.
The person that wrote the primary bootloader is the same person writing most of the Mediatek kernel code in mainline. I forget where I put together their story, but I think they were some kind of prodigy type that reverse engineered and wrote an entire bootloader from scratch, implying a very deep understanding of the hardware. IIRC I may have seen that info years ago in the uboot forum; I think someone accused the Mediatek bootloader of copying uboot. Again IIRC, their bootloader was being developed open source, and there is some kind of partially available source still on a git somewhere. However, they wound up working for Mediatek and are now doing all the open source stuff. I found them on the OpenWrt forum and was a bit of an ass, asking why they didn’t open source the bootloader code. After that, some of the more advanced users on OpenWrt explained to me how the bootloader is static, which I already kinda knew; I mean, I know it is on a flash memory chip on the SPI bus. This makes it much easier to monitor the starting state and what is really happening. These systems are very old 1990s-era designs; there is not a lot of room to do extra stuff unnoticed.
On the other hand, all cellular modems are completely undocumented, as are all WiFi modems since the early 2010’s, with the last open source WiFi modem being the Atheros chips.
There is no telling what is happening with cellular modems. I will say, the integrated nonremovable batteries have nothing to do with design or advancement. They are capable monitoring devices that cannot be turned off.
However, if we can monitor all registers in a fully documented SoC, we can fully monitor and control a peripheral bus in most instances.
Overall, I have little issue with Mediatek compared to Qualcomm. They are largely emulating the behavior of the bigger player, Broadcom.
Usually you can get the kernel source for Qualcomm at least, MediaTek tho…
This is a dev kit. This is not for normal people to use. RISC-V is not there yet, but this is a good first step.
At the point you want to upgrade this chip swapping out the entire SOC including the RAM is likely a better option.
Could someone eli5 risc-v and why the fuss?
Edit: thanks for the replies. Searching further, this 15 min video is quite well made and told me more than I need to know (for now) https://www.youtube.com/watch?v=Ps0JFsyX2fU
RISC-V (pronounced “risk five”) is a free, open-source Instruction Set Architecture (ISA). Other well-established ISAs like x86/amd64 (Intel and AMD) and ARM are proprietary; therefore, one must pay very expensive licenses to design and build a processor using these architectures. You don’t need to pay a license to build a RISC-V processor, you only need to follow the specifications. That doesn’t mean the CPU designs are also free; no, they stay very much the closed property of the designer. But RISC-V represents, nonetheless, a very big step towards more transparency and technology freedom.
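To make “you only need to follow the specifications” concrete, here’s a toy sketch (mine, not from any official tooling) that encodes an addi instruction bit-for-bit from the publicly documented I-type layout in the spec:

```python
def encode_addi(rd: int, rs1: int, imm: int) -> int:
    """Encode RISC-V 'addi rd, rs1, imm' from the open spec's I-type
    layout: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011."""
    return ((imm & 0xFFF) << 20) | (rs1 << 15) | (0b000 << 12) | (rd << 7) | 0b0010011

word = encode_addi(1, 1, 1)   # addi x1, x1, 1
```

Anyone can write this because the encoding is public; a real assembler may choose the compressed c.addi form instead when the C extension is enabled.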
I pity the five year old who has to read this.
I’m a grown up though so thank you for the explanation.
Costs less
Yes, I admit it’s still a pretty complex explanation. I gave it my best shot :)
Isn’t it possible to add custom instructions and lock others out of them, leading back to the current ARM situation?
I know there are already a number of extensions in the specifications, such that RISC-V could be relevant for anything from the simplest microcontroller up to the most powerful supercomputer. I suppose it is possible and allowed to design a CPU with proprietary extensions. What should prevent an ARM type of situation is the fact that so many use cases are already covered by the open specifications. What is not there yet, to my knowledge, are things like graphics, video, and neural-net acceleration.
graphics, video, neural-net acceleration.
All three are kinda at least half-covered by the vector instructions which absolutely and utterly kills any BLAS workload dead. 3d workloads use fancy indexing schemes for texture mapping that aren’t included, video I guess you’d want some special APU sauce for wavelets or whatever (don’t know the first thing about codecs), neural nets should run fine as they are provided you have a GPU-like memory architecture, the vector extension certainly has gather/scatter opcodes. Oh, you’d want reduced precision but that’s in the pipeline.
Especially with stuff like NNs though, the microarch is going to matter a lot. Even if, say, a convolution kernel from one manufacturer uses instructions a chip from another manufacturer understands, it’s probably not going to perform at an optimal level.
VPUs AFAIU are usually architected like DSPs: A bunch of APUs stitched together with a VLIW insn encoder very much not intended to run code that is in any way general-purpose, because the only thing it’ll ever run is hand-written assembly, anyway. Can’t find the numbers right now but IIRC my rk3399 comes with a VPU that out-flops both the six arm cores and the Mali GPU, combined, but it’s also hopeless to use for anything that can’t be streamed linearly from and to memory.
Graphics is the by far most interesting one in my view. That is, it’s a lot general purpose stuff (for GPGPU values of “general purpose”) with only a couple of bits and pieces domain-specific.
The instruction set is a tiny part of the overall CPU architecture. You don’t need to lock it as everything else is proprietary: manufacturing, cores, electric design, etc. Most RISC-V processors today use ARM cores and are subject to ARM licensing.
terrible chat GPT ass explanation, sorry.
RISC-V is like LEGO, where you can put together pieces to make whatever you want. Nobody can tell you what you can or can’t make, you can be as creative as you want. Oh, and there’s motors and stuff too.
ARM is like Hotwheels, there are lots of cars, but you can’t make your own. You can get a bit creative making tracks, but that’s about it.
AMD and Intel are like RC cars, they’re really fun, but they use a lot of batteries and you can’t really customize them. Oh, and they’re expensive, so you only get one.
Each is cool, but with LEGO, you can do everything the others do, and more. Like LEGO, RISC-V can be slow to work with, especially if you don’t have the pieces you want, but the more people that use it, the better it’ll get and the more pieces you can get. And if you have a 3D printer, you can make your own pieces and share them with others.
Right now it’s more like megablocks
“you” as in person with required skills, resources and access to a chip fabrication facility. For many others they can just buy something designed and produced by others, or play around a bit on FPGAs.
We will also see how much variation with RISC-V will actually happen, because if every processor is a unique piece of engineering, it is really hard to write software that works on every one.
Even with ARM there are arguably too many designs out there, which currently take a lot of effort to integrate.
Sure, and there are more people with that access than just AMD, ARM, NVIDIA, and Intel.
If game devs supported RISC-V, Valve could’ve made the Steam Deck without having to get AMD’s help, which means they would’ve had more options to keep prices down while meeting their performance goals. Likewise for server vendors, phone manufacturers, etc, who currently need to buy from ARM (and fab themselves) or AMD/Intel.
And that’s why I mentioned 3D printing. Making custom 3D models of LEGO pieces is out of reach for many (most?) and even owning a 3D printer is out of reach for many. I have one, but I’ve only built a handful of things because it’s time consuming.
As it gets more software support, we should see a lot more variety in RISC-V chips. We’re not there yet, but we should be excited because it’s starting to get traction, and the future looks bright.
It also means that anyone can make their own instruction set extensions or just some custom modifications, which would make software much more difficult to port. You would have to patch your compiler for every individual chip, if you even figure out what those instructions are and what they do. Backwards, forwards or sideways (to other CPUs from other vendors) compatibility takes effort, and not everyone will try to have that, and will instead add their own individual secret sauce to their instruction set.
IMO, I am excited about RISC-V, but if the license doesn’t force adopters to open their designs under an open-source license as well, I expect even more portability issues than we already have with ARM SoCs.
Compilers basically already do that, and distributed executables usually assume minimal instruction support. Compilers can detect what’s supported, so it’s largely a solved problem, at least if you compile things yourself.
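For illustration, a rough sketch of the kind of target description involved: toolchains identify a chip with an ISA string like rv64imafdc_zicsr and split it into extensions. This helper is hypothetical and simplified (real strings also carry version numbers, and g expands to several letters), not a real compiler API:

```python
def parse_isa_string(isa: str):
    """Split a RISC-V ISA string like 'rv64imafdc_zicsr' into its
    base (e.g. rv64i) and a list of extensions. Simplified sketch."""
    isa = isa.lower()
    assert isa.startswith(("rv32", "rv64"))
    base, rest = isa[:5], isa[5:]           # 'rv64i', 'mafdc_zicsr'
    exts = []
    for chunk in rest.split("_"):
        if chunk.startswith(("z", "x")):
            exts.append(chunk)              # multi-letter ext: zicsr, xvendor...
        else:
            exts.extend(chunk)              # single letters: m, a, f, d, c
    return base, exts
```

A compiler targeting a vendor chip with secret custom instructions has nothing to put in that string, which is exactly the fragmentation worry above.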
And if you have a 3D printer, you can make your own pieces and share them with others.
I really wish that an affordable desktop chip fab was a thing. Maybe with graphene semiconductors it could be feasible.
It’s affordable today, but only for big orders in the millions (e.g. someone like Valve is big enough).
It would be super cool if small batches (hundreds) were feasible, but I don’t think there’s much demand there since that’s where FPGAs come in.
This is a great answer.
ARM is like Hotwheels, there are lots of cars, but you can’t make your own.
That’s not entirely true. There are companies that have the ARM architecture license, like Apple or Cavium (now bought by Marvell). They are allowed to make their own Hotwheels using the spring system or the wheels or whatever.
Not an eli5 because I’m still not caught up on it but if my memory serves, RISC-V is an open source architecture for processors, basically like amd64 or arm64, actually I’m pretty sure ARM’s chips are RISC derivatives.
Edit: correcting my comment, ARM makes RISC chips, not RISC-V
ARM and RISC-V are entirely different in that neither one is based on the other, but what they have in common is that they’re both RISC (Reduced Instruction Set Computing) architectures. RISC is what makes ARM CPUs (in your phone, etc) so efficient and hopefully RISC-V will get there too.
x86 by comparison is Complex Instruction Set Computing, which allows for more performance in some cases, but isn’t as efficient.
The original debate from the 80s that defined what RISC and CISC mean has already been settled, and neither of those categories really applies anymore. Today all high performance CPUs are superscalar, use microcode, reorder instructions, have variable width instructions, vector instructions, etc. These are exactly the bits of complexity RISC was supposed to avoid in order to achieve higher clock speeds and therefore better performance. The microcode used in modern CPUs is very RISC-like, and the instruction sets of ARM64/RISC-V and their extensions would have likely been called CISC in the 80s. All that to say the whole RISC vs CISC thing doesn’t really apply anymore, and neither does it explain any differences between x86 and ARM. There are differences and they do matter, but by and large it’s not due to RISC vs CISC.
As for an example: if we compare the M1 and the 7840u (similar CPUs on a similar process node, one arm64 the other AMD64), the 7840u beats the M1 in performance per watt and outright performance. See https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-vs-apple_m1. Though the M1 has substantially better battery life than any 7840u laptop, which very clearly has nothing to do with performance per watt but rather design elements adjacent to the CPU.
In conclusion the major benefit of ARM and RISC-V really has very little to do with the ISA itself, but their more open nature allows manufacturers to build products that AMD and Intel can’t or don’t. CISC-V would be just as exciting.
have variable width instructions,
compressed instruction set /= variable-width. x86 instructions are anything from one to a gazillion bytes, while RISC-V is four bytes or optionally (very commonly supported) two bytes. Much easier to handle.
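The difference shows up directly in the decoder. Here’s a hedged sketch of the length rule from the base spec (lumping the reserved ≥48-bit encodings together): the low bits of the first 16-bit parcel tell you the whole length up front, whereas x86 needs to chew through prefixes and opcode bytes:

```python
def riscv_insn_length(first_parcel: int) -> int:
    """Instruction length in bytes from the first 16-bit parcel,
    per the base spec's length-encoding rule (simplified: the
    reserved >=48-bit space is lumped together)."""
    if first_parcel & 0b11 != 0b11:
        return 2   # compressed 16-bit instruction (C extension)
    if first_parcel & 0b11111 != 0b11111:
        return 4   # standard 32-bit instruction
    return 6       # 48-bit-or-longer reserved encodings
```

That two-branch check is why wide parallel fetch/decode is so much cheaper to build for RISC-V (and ARM) than for x86.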
vector instructions,
RISC-V is (as far as I’m aware) the first ISA since Cray to use vector instructions. Certainly the only one that actually made a splash. SIMD isn’t vector instructions, most crucially with vector insns the ISA doesn’t care about vector length on an opcode level. That’s like if you wrote MMX code back in the days and if you run the same code now on a modern CPU it’s using just as wide registers as SSE3.
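The vector-length-agnostic idea can be sketched as a strip-mined loop in the style of RVV’s vsetvli: the code asks the hardware how many elements it can do per iteration instead of hard-coding a register width. This is a simplified Python model of the control flow, not real RVV intrinsics:

```python
def vla_add(a, b, hw_vlen: int):
    """Strip-mined vector add modeled on RVV's vsetvli loop: the
    program never hard-codes the hardware vector length, it just
    asks 'how many elements this time?' (simplified model)."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        vl = min(hw_vlen, len(a) - i)      # what vsetvli would return
        for j in range(vl):                # one vector instruction's work
            out[i + j] = a[i + j] + b[i + j]
        i += vl
    return out
```

The same binary runs unchanged on a core with 128-bit or 1024-bit vectors (just a different hw_vlen), which is exactly what MMX-to-SSE-to-AVX style SIMD could not do without recompiling.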
But you’re right that the old definitions are a bit wonky nowadays; I’d say the main differentiating factors today are having a load/store architecture and disciplined instruction widths. Modern out-of-order CPUs with half a gazillion instructions of a single thread in flight at any time of course don’t really care about the load/store thing, but both things simplify insn decoding to ludicrous degrees, saving die space and heat. For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you’re wondering.)
Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley. RISC I and II were the originals, II is what all the other RISC architectures were inspired by, III was a Smalltalk machine, IV Lisp. Then a long time nothing, then lecturers noticed that teaching modern microarches with old or ad-hoc insn sets is not a good idea, x86 is out of the question because full of hysterical raisins, ARM is actually quite clean but ARM demands a lot, and I mean a lot of money for the right to implement their ISA in custom silicon, so they started rolling their own in 2010. Calling it RISC V was a no-brainer.
compressed instruction set /= variable-width […]
Oh for sure, but before the days of super-scalars I don’t think the people pushing RISC would have agreed with you. Non-fixed instruction width is prototypically CISC.
For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you’re wondering.)
If you can simplify the instruction decoding that’s always a benefit - moreso the more cores you have.
Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley.
You’ll get no disagreement from me on that. Maybe you misunderstood what I meant by “CISC-V would be just as exciting”? I meant that if there was a popular, well designed, open source CISC architecture that was looking to be the eventual future of computing instead of RISC-V then that would be just as exciting as RISC-V is now.
The CISC vs RISC thing is dead. Also modern ARM ISAs aren’t even RISC anymore even if that’s what they started out as. People have no idea what’s going on with modern technology.
X86 can actually be quite low power (see LPE cores and Intel Atom). The producers of x86 don’t specialize in that though, unlike a lot of RISC-V and ARM producers. It’s not that it’s impossible, just that it isn’t typically done that way.
So is Reduced Instruction Set like in the old assembly days where you couldn’t do multiplication, as there wasn’t a command for it, so you had to do multiple loops of addition?
Right concept, except you’re off in scale. A MULT instruction would exist in both RISC and CISC processors.
The big difference is that CISC tries to provide instructions to perform much more sophisticated subroutines. This video is a fun look at some of the most absurd ones, to give you an idea.
ARM prominently has an instruction to deal with JavaScript. And RISC-V will have those kinds of instructions too; they’re too useful, saving a massive number of instructions and cycles, and the CPU itself doesn’t really need any logic added: the insn decoder just has to be taught a bit pattern and which micro-ops to emit, the ALUs can already do it.
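The JavaScript instruction being referred to is presumably ARM’s FJCVTZS, which bakes JavaScript’s double-to-int32 conversion rules into a single opcode. A rough Python sketch of the semantics it implements (the helper name is mine, not ARM’s):

```python
import math

def js_to_int32(x: float) -> int:
    """Emulate JavaScript's ToInt32: truncate toward zero,
    wrap modulo 2**32, then reinterpret as signed 32-bit."""
    if math.isnan(x) or math.isinf(x):
        return 0
    n = int(x) % (1 << 32)               # int() truncates toward zero
    return n - (1 << 32) if n >= (1 << 31) else n

# Without a dedicated instruction, a JS engine needs a
# multi-instruction sequence plus branches for every coercion.
print(js_to_int32(3.9))     # 3
print(js_to_int32(2**31))   # -2147483648
print(js_to_int32(-5.2))    # -5
```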
What that instruction will never do in a RISC CPU though is read from memory.
On the flipside, some RISC-V macro-ops are CISC, fusing memory access and arithmetic. That’s a microarchitecture detail, though, only affecting code to the degree of “if you want this stuff to run faster on some cores, put those instructions in this exact sequence so the core can spot and fuse them.”
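To illustrate the load/store point with a toy example (entirely made up, not a real decoder): a CISC memory-operand add is one architectural instruction that the front end cracks into micro-ops, while a load/store RISC ISA exposes the same three steps as separate instructions.

```python
def decode_cisc_add_mem(dst_addr, src_reg):
    """Crack a CISC-style `add [dst_addr], src_reg` into the
    load/add/store steps a RISC ISA would expose directly."""
    return [
        ("load",  "tmp", dst_addr),   # tmp <- mem[dst_addr]
        ("add",   "tmp", src_reg),    # tmp <- tmp + regs[src_reg]
        ("store", dst_addr, "tmp"),   # mem[dst_addr] <- tmp
    ]

def execute(micro_ops, mem, regs):
    scratch = {}
    for op, a, b in micro_ops:
        if op == "load":
            scratch[a] = mem[b]
        elif op == "add":
            scratch[a] += regs[b]
        elif op == "store":
            mem[a] = scratch[b]

mem, regs = {0x100: 40}, {"x1": 2}
execute(decode_cisc_add_mem(0x100, "x1"), mem, regs)
print(mem[0x100])  # 42
```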
RISC-V is modular, so multiplication (the M extension) is optional, but practically everything will support it.
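Since M is optional, a chip without it does multiplication in software, which is basically the “loops of addition” idea upthread, just smarter: a shift-and-add helper. A hypothetical sketch of what such a libgcc-style routine does, using only add/shift/branch:

```python
def soft_mul(a: int, b: int, bits: int = 64) -> int:
    """Shift-and-add multiply using only add, shift, and branch --
    the kind of helper linked in when a core lacks the M extension."""
    mask = (1 << bits) - 1
    a &= mask
    b &= mask
    result = 0
    while b:
        if b & 1:                       # low bit set: add shifted a
            result = (result + a) & mask
        a = (a << 1) & mask             # shift instead of re-adding a times
        b >>= 1
    return result

print(soft_mul(6, 7))  # 42
```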
Nah, the Complex instructions are ridiculously complex and the Reduced ones can still do a lot of stuff.
ARM = Advanced RISC Machine
However, RISC-V is a specific type of RISC, and ARM is not a derivative of RISC-V but of RISC.
ARM = Advanced RISC Machine
Originally Acorn RISC Machine before that
To clarify for those who might not understand that explanation: RISC is just a type of instruction set. x86 is CISC, but ARM and RISC-V are RISC.
Yup. In general:
- CISC - complex instruction set - you’ll get really exotic operations, like PMADDWD (multiply packed 16-bit words, then add adjacent products) or the SSE 4.2 string-compare instructions
- RISC - reduced instruction set - instead of an instruction for everything, RISC requires users to combine instructions, and specialized extensions are fairly rare
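To make the CISC bullet concrete, here is PMADDWD’s semantics emulated in Python (the function name and list representation are mine): one CISC instruction does four multiplies plus two adds, where a base RISC ISA would spell each step out (vector extensions narrow the gap).

```python
def pmaddwd(a, b):
    """Emulate x86 PMADDWD on lists of signed 16-bit words:
    multiply elementwise, then add adjacent pairs of the
    32-bit products -- all in a single CISC instruction."""
    prods = [x * y for x, y in zip(a, b)]
    return [prods[i] + prods[i + 1] for i in range(0, len(prods), 2)]

# One instruction's worth of work: 4 multiplies + 2 adds.
print(pmaddwd([1, 2, 3, 4], [5, 6, 7, 8]))  # [17, 53]
```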
Modern CISC CPUs often (usually? always?) have a RISC-like design behind the CISC interface; the front end just translates CISC instructions into RISC-style micro-ops for processing. RISC CPUs tend to have more user-accessible cores, so the user/OS handles sending instructions. CISC can be faster for complex operations since you have fewer round trips to the CPU, whereas RISC can handle more instructions simultaneously due to more cores, so big, diverse workloads may see better throughput. Basically, it’s the old argument of bandwidth vs latency.
Except modern ARM chips are actually CISC too. Also microcode isn’t strictly RISC either. It’s a lot more complex than you are thinking.
There are some RISC characteristics ARM has kept like load-store architecture and fixed width instructions. However it’s actually more complex in terms of capabilities and instructions than pretty much all earlier CISC systems, as early CISC systems did not have vector units and instructions for example.
Yeah, they’ve gotten a bit bloated, but ARM is still a lot simpler than x86. That’s why ARM is usually higher core count, because they don’t have as many specialized circuits. That’s good for some use cases (servers, low power devices, etc), and generally bad for others (single app uses like gaming and productivity), though Apple is trying to bridge that gap.
But yeah, ARM and x86 are a lot more similar today than they were 10 years ago. There’s still a distinct difference though, but RISC-V is a lot more RISC than ARM.
Arm’s chips are not RISC-V derivatives.
Yup, they’re RISC chips (few instructions), but RISC-V is a separate product line.
It’s not just a separate product line. It’s a different architecture. Not made by the same companies either, so ARM aren’t involved at all. It’s actually a competitor to ARM64.
Exactly. That’s what I meant by “different product line,” like how Honda makes both cars and motorcycles, they may share similar underlying concepts (e.g. combustion engines), but they’re separate things entirely.
And since RISC-V is open source, the discussion about companies is irrelevant. AMD could make RISC-V chips if it wants, and they do make ARM chips. Same company, three different product lines. Intel also makes ARM chips, so the same is true for them.
Since when did AMD make ARM chips? Also they aren’t as different as a motorcycle and a car. It’s more like compression ignition vs spark ignition. They are largely used in the same applications (or might be in the future), although some specific use cases work better with one or the other. Much like how cars can use either petrol or diesel, but say a large ship is better to use compression ignition and a motorcycle to use spark ignition.
At least 10 years now, and they’re preparing to make ARM PC chips.
Also they aren’t as different as a motorcycle and a car. It’s more like compression ignition vs spark ignition.
I tried to keep it relatively simple. They have different use cases like cars vs motorcycles, and those use cases tend to lead to different focuses. We can compare in multiple ways:
x86 like car:
- more torque (higher clock speeds, better IPC)
- carries a driver and maybe a passenger - fewer, faster cores
- less complicated (less stuff on the SOC), but more intricate (more pipelining)
ARM like motorcycle:
- simpler engine - less pipelining, smaller area, less complex cooling
- simpler accessories - the engine is a SOC, but you can attach a sidecar (coprocessor) or trailer, but your options are pretty limited (unlike x86 where a lot of stuff is still outside the CPU, but that’s changing)
The engines (microarch) aren’t that different, but they target different types of customers. You could throw a big motorcycle engine into a car, and maybe put a small car engine into a motorcycle, but it’s not going to work as well. So the form factor (ISA) is the main difference here.
But yeah, diesel vs gasoline is also a decent example, but that kind of raises the question of where RISC-V fits in (in my example, it would be a DIY engine kit, where it can scale from motorcycles to cars to trucks to ships, if you pick the right pieces).
The current CPU architectures we use for desktops/laptops are old, complex and a bit creaky. RISC-V is a modern, fast and efficient architecture, which is free and open. Expect hugely lower battery consumption and for the computer to just “feel” faster, even if the specs are equivalent to other CPUs. The fact that it’s free also makes it easier and cheaper to work with. However, software support is going to be absolutely shockingly abysmal and probably will be for at least 5-10 years, until there’s some buy-in from e.g. Microsoft or someone who can make a decent translation layer.
If you want to experience the benefits of RISC nowadays in a desktop or laptop, pretty much the only options are the Microsoft Surface or Apple Silicon Macs, which are ARM64 - this isn’t RISC-V, but they are RISC - and many of the benefits apply to them. Apple’s software support is light-years ahead of Microsoft’s though, so I’d strongly suggest avoiding the ARM64 Surface, at least for the time being.
When the first person opens their new laptop:
“RISC architecture is going to change everything”
Slow down there, Zerocool
HACK THE PLANET ✊
Putting on my rollerblades now.
That movie was ahead of its time in so many ways
Removed by mod
That makes sense. Management types are usually pretty RISC-averse
I have to admit you made me chuckle
As if managers even know what RISC-V is
They prefer RISC-0, and MONEY-5
I mean… I do, too…
I would say that IBM is a rather large company and I’m pretty sure they’ve been producing RISCs for like 30+ years.
Now imagine we only had Windows: no one would create such a thing, because Windows and its programs don’t have support for it.
Great, I’d be glad if they’d also consider shipping to more countries, with localized keyboards
I mean, they at least offer blank + clear ANSI and blank + clear ISO keyboard options alongside their 14 other keyboard formats.
Yes that’s amazing — but a blank keyboard is not for everyone.
Moreover, even if I try to cope with this setup, I still cannot receive the laptop and I’d have to use a power adapter
It’s just usb-c power right?
Yes, and so what?
As surprising as it may seem, some might still want to use the supplied charger because they don’t have spare ones powerful enough for the laptop.
I have a Macbook with Magsafe and 6 USB-C phone / small devices chargers. None of them could power a Frame.work so I cannot just use another charger because it’s usb-c
Just buy one if you need. I don’t understand people who prefer forced bundles over deciding what to buy.
Unless you think an included charger is free. It’s not, it’s factored in the price.
Unless you think an included charger is free
Spot on, last time I bought a laptop it came with a charger, so that’s why I was referring to this and why I was concerned about its compatibility with my power plugs.
As I was still unable to order a frame.work yet, I wasn’t aware that frame.work didn’t include a charger by default, so your point makes perfect sense.
In this case I’ll probably end up buying a charger, because none of the ones in my possession can cope with the watts required.
ARM first stood for Acorn RISC Machine
one of the world’s first RISC-V laptops
RISC-V
💀 i know. do i have to attach my brain to my comments?
Any information on the GPU they are pairing with it?
Does anyone know if it’s possible to use a regular AMD or Nvidia GPU with it?
This is not for someone to daily drive. You’d probably get better performance duct-taping a Raspberry Pi to a Bluetooth keyboard and a 7-inch Pi display.
haha, that doesn’t answer the question at all. But I appreciate you.
It does actually.
Edit: It’s an article about how a company is going to assist in providing RISC-V dev boards to Framework. It’s not about a consumer-ready product with a dedicated GPU.
The processor it’s using is linked in the article: https://www.cnx-software.com/2022/08/29/starfive-jh7110-risc-v-processor-specifications/
It’s a system-on-chip (SoC) design with an embedded GPU, the Imagination BXE-4-32, which appears to be designed mainly for smart TVs and set-top boxes.
The SoC itself only has two PCIe 2.0 lanes on separate interfaces so you can’t use both for the same device, and one is shared with the USB 3.0 interface.
That’s not even enough bandwidth to drive an entry-level notebook GPU from over a decade ago. Seriously: the GeForce GT 520M, launched January 2011, wants a full PCIe 2.0 x16 interface. Same with the Radeon HD 6330M. You could probably get away with just 8 lanes if you had to, but not with only one.
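For anyone checking the math: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding (8 data bits for every 10 bits on the wire), which works out to roughly 500 MB/s of usable bandwidth per lane per direction. A quick back-of-envelope sketch, nothing vendor-specific:

```python
# PCIe 2.0: 5 GT/s per lane, 8b/10b line encoding.
GT_PER_S = 5e9

# payload bits/s -> bytes/s -> MB/s, per lane per direction
per_lane_MBps = GT_PER_S * 8 / 10 / 8 / 1e6

print(per_lane_MBps)        # 500.0 MB/s for the single lane available here
print(per_lane_MBps * 16)   # 8000.0 MB/s for the x16 link a GT 520M expects
```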
The other commenter wasn’t kidding when they said you could get more power out of a Raspberry Pi 4. It’s even mentioned in the article.
That’s great but they need to fix their hinges first.
They did, almost immediately after it became a known issue.
They already have, and they offer optional ones that take more force to move.
There are 2 updated hinges they sell.
https://frame.work/products/display-hinge-kit?v=FRANFB0001
Unless you’re talking about something else
Nope those were it. And it’s still broken after getting replaced once. It’s unfortunate, I wanted a better laptop, not a desktop in all but name.
Didn’t they in the new models?
Just asking.