I love that OrangePi is making good hardware, but after my experience with the OrangePi 5 Max, I won’t be buying hardware from them again. The device is largely useless due to a lack of software support. The same thing happened with the MangoPi MQ-Pro. I’ll just stick with RPi. I may not get as much hardware for the money, but the software support is fantastic.
I don't understand why many say that RPi software/firmware support is 'fantastic'. Maybe it was in the beginning, compared to other chips and boards, but right now it's only a bit above average: they ignore many things that are out of their control or that they can't debug and fix (as with the Wi-Fi chip firmware).
> The device is largely useless due to a lack of software support.
I think everyone considering an SBC should be warned that none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be.
Even the Raspberry Pi 5, one of the most well supported of the SBCs, is still getting trickles of mainline support.
The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
"none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be"
Going big-name doesn't even help you here. It's the same story with Nvidia's Jetson platforms; they show up, then within 2-3 years they're abandonware, trapped on an ancient kernel and EOL Ubuntu distro.
You can't build a product on this kind of support timeline.
Yup, I'm working a lot with Jetsons, and having the Orin NX on 22.04 is quite limiting sometimes, even with the most basic things. I got a random USB Wi-Fi dongle for it, and nope! Not supported in kernel 5.15, now have fun figuring out what to do with it.
> The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
If we take a step back, I think this is something to be saddened by. I, too, find boards without proper mainline support to be e-waste, and I am glad that we perhaps aren't producing quite as much of that anymore. But imagine if a good chunk of these boards did indeed have great mainline support. These incredibly cheap devices would be a perfect guarantor of democratized, unstoppable general compute in the face of the forces that many of us fear are rising. Even if that's not a fear you share, they'd make the perfect tinkering environment for children and adults not otherwise exposed to such things.
It takes a few years, but the Broadcom chips in Pis eventually get mainline support for most peripherals, similar to modern Rockchip SoCs.
The major difference is Raspberry Pi maintains a parallel fork of Linux and keeps it up to date with LTS and new releases, even updating their Pi OS to later kernels faster than the upstream Debian releases.
Also, unlike a lot of other manufacturers, who only provide Linux builds for their own hardware for a couple of years, even the latest version of the official Raspberry Pi OS seems to support every Raspberry Pi model, all the way back to the first one via the 32-bit version of the OS.
Likewise, the 64-bit version of the OS looks like it supports every Raspberry Pi model that has a 64-bit CPU.
More like people try doing anything other than using the base OS, and realize the bottom-tier x86 mini-PCs are 3-4x faster for the same price and can encode a basic video stream without bogging down.
If the RPI came with any recent mid-tier Snapdragon SOC, it might be interesting. Or if someone made a Linux distro that supports all devices on one of the Snapdragon X Elite laptops, that would be interesting.
Instead, it's more like the equivalent of a cheap desktop with an integrated GPU from 20 years ago, on a single board, with decent Linux support and GPIO. So it's either a Linux learning toy or an integrated component within another product, and not much in between.
I've used them mostly for dedicated tasks, at least the RPi3 and older. I've used RPi3s as CUPS servers at a couple of sites, for a few printers. They've been running 24/7 for many years now with no issues. As I could buy those SBCs for the original low price and the installation was a total no-brainer, I would never consider using any kind of mini PC for that.
I have a couple of RPi4 with 8GB and 4GB RAM respectively, these I have been using as kind-of general computers (they're running off SSDs instead of SD cards). I've had no reason so far to replace them with anything Intel/AMD. On the other hand they can't replace my laptop computer - though I wish they could, as I use the laptop computer with an external display and external keyboard 100% of the time, so its form factor is just in the way. But there's way too little RAM on the SBCs. It's bad enough on the laptop computer, with its measly 16GB.
I built a nice little cyberdeck around an RPi 5 but it's turned out to be very disappointing. I was counting on classic X11's virtual display stuff to enable a 1080x480 screen to be usable with panning (virtual 720p or something, just a cool vertical pan). Problem is, the X11 support sucks, and so there's almost no 2D acceleration, so this simple thing that used to work great on a 486 with an ATI SVGA doesn't work very well at all on a machine a thousand times faster. Wayland has of course no support for a feature like this one, so I'm stuck with a screen too narrow to use, and performance for everything else that's pretty sub-par.
Aah, I had totally forgotten about that X11 feature, I did use it for something very many years ago.
I have only used the default setup (which is presumably Wayland) on the Pi, looks good but I don't actually use display features much.
People do all manner of wacky stuff with Pis that could be more easily done with traditional machines. Kubernetes clusters and emulation boxes are the more common use cases; the former can be done with VMs on a desktop and the latter is easily accomplished via a used SFF machine off of eBay. I've also heard multiple anecdotes of people building Pi clusters to run agentic development workflows in parallel.
I think in all cases it's the sheer novelty of doing something with a different ISA and form factor. Having built and racked my share of servers I see no reason to build a miniature datacenter in my home but, hey, to each their own.
I concur with this. The novelty of the Pi is getting a computer somewhere that you normally wouldn't due to the size and complexity. GPIO is a very nice addition, but it looks like conventional USB to GPIO is a thing so it's not really a huge driver to use a Pi.
> device is largely useless due to a lack of software support.
Came looking for this. It's the pitfall of 99% of hardware projects. They get a great team of hardware engineers, they go through the madness of actually producing a thing (which is crazy complex) at scale, economically viable (hopefully), with logistics hurdles including worldwide taxes, tariffs, etc... only to end up with nobody outside their own team able to build and run a Hello World example.
To be fair, even big players, e.g. NVIDIA, suck at this too. Sure, they have their GPUs and CUDA, but if you look at the "small" things like Jetson, everybody I met told me the same thing: great hardware, unusable, because the stack worked once when it shipped and then was never maintained.
Welcome to the world of firmware. That’s why Raspberry Pi won and pivoted to B2B compute module sales: they managed to leech broad community support for their chips and then turn around and sell it to an industry that was tired of garbage BSPs.
The reality for actual products is even worse. Qualcomm and Broadcom (even before the PE acquisition) are some of the worst companies to work with imaginable. I’ve had situations where we wasted a month tracking down a bug only for our Qualcomm account manager to admit that the bug was in a peripheral and in their errata already but couldn’t share the whole thing with us, among many other horror stories. I’d rather crawl through a mile of broken glass than have to deal with that again, so I have an extreme aversion to using anything but RPi, as distasteful as that is sometimes.
What's Qualcomm's and Broadcom's moat? Is it "just" IP, or could they be replaced by a slower, more expensive equivalent, say FPGA-based, relying on open building blocks?
I gave up on them and switched to a second-hand mini PC. These mini desktops are offloaded in bulk by governments and offices for cheap, and have much better specs than a same-priced SBC. And you are no longer limited to “raspberry pi” builds of distros.
Unless you strictly need the tiny form factor of an SBC you are so much better going with x86.
I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)
I can run my N100 NUC at 4W wall-socket power draw at idle. If I keep turbo boost off, it stays under 6W even at full load, but then it is also terribly slow. With turbo boost enabled, power draw can go to 8-10W at full load.
Not sure how this compares to the OrangePi in terms of performance per watt, but it is already pretty far into the area of marginal gains for me, at the cost of having to deal with ARM, custom housing, adapters to keep the wall-socket draw efficient, etc. Having an efficient pico PSU power a Pi or Orange Pi is also not cheap.
- Boost enabled
- WiFi disabled
- No changes to P-states or anything else in the BIOS
- Fedora
- Applied all suggestions from powertop
- I don’t recall changing anything else.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).
A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
I have. It’s great on the RPi. On OPi5max, it didn’t support the hardware.
Worse, if you flash it to UEFI you’ll lose compat with the one system that did support it (older versions of BredOS). For that, you grab an old release, and never update. If you’re running something simple that you know won’t benefit from any update at all, that’s great. An RK3588 is a decent piece of kit though, and it really deserves better.
Every time there's a new discussion of some ARM board, I compare the price / features / power use with the Geekom N100 SBC I picked up a while back.
As far as I can tell, the OrangePi 6 remains distinctly uncompetitive with SBCs based on low-end intel chips.
- Orange pi consumes much more power (despite being an arm CPU)
- A bit faster on some benchmarks, a bit slower on others
- Intel SBC is about 60% the price, and comes with case + storage
- Intel SBC runs mainline linux and everything has working drivers
Yes, and no. I have an OrangePi 5 Ultra and I'm finally running a vanilla kernel on it.
Don't bother trying anything before kernel 6.18.x -- unless you are willing to stick with their 6.1.x kernel with a million+ line diff.
The u-boot environment that comes with the board is hacked up. E.g.: it supports an undocumented subset of extlinux.conf ... just enough that whatever Debian writes by default breaks it. Luckily, the u-boot project does support the board, and I was able to flash a newer u-boot to the boot media and then to the onboard flash [1].
Now the HDMI port doesn't show anything, and I use a couple of serial pins when I need to do anything before it's on-net.
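For context, Debian's u-boot-menu package writes an extlinux.conf along these lines (the version numbers, console settings, and paths here are illustrative, not taken from this board):

```
default l0
menu title U-Boot menu
timeout 50

label l0
        menu label Debian GNU/Linux
        linux /boot/vmlinuz-6.18.0-arm64
        initrd /boot/initrd.img-6.18.0-arm64
        fdtdir /usr/lib/linux-image-6.18.0-arm64
        append root=UUID=... console=ttyS2,1500000 ro
```

A vendor u-boot that only parses some of these keywords (ignoring fdtdir, say) will presumably choke on exactly this kind of file.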
--
I purchased a Rock 5T (also RK3588) and the story is similar ... but upstream support for the board is much worse. Doing a diff between the device trees [2] (the one supplied via the custom Debian image vs. the vanilla kernel's) tells me a lot. E.g.: there are addresses that differ between the two.
Upstream u-boot doesn't have support for the board explicitly.
No display, serial console doesn't work after boot.
I just wanted this board for its dual 2.5Gb ethernet ports, but even the ports seem buggy. It might be an issue with my ISP... they seem to think otherwise.
--
Not being able to run a vanilla kernel/u-boot is a deal-breaker for me. If I can't upgrade my kernel to deal with a vulnerability without the company existing/supporting my particular board, I'm not comfortable using it.
IMHO, these boards exist in a space somewhere between the old embedded world (where just having a working image is enough) and the modern Linux world (where one needs to be able to update and apply patches).
Hardware video decoding support for H.264 and AV1 just landed in 7.0, so it hasn't been a great bleeding-edge experience for desktop and Plex etc. users. But IMO late support is still support.
Also not on this list: the GPU Vulkan drivers Collabora are working on. I don't think that's really on Rockchip, since they're Arm Mali-G610 GPUs, but yeah, those didn't get stable in Mesa until last year.
On a related note: I pulled my pinebook pro out of a drawer this week, and spent an hour or so trying to figure out why the factory os couldn’t pull updates.
I guess Manjaro just abandoned ARM entirely. The options are Armbian (probably the pragmatic choice, but fsck systemd) or OpenBSD (no video acceleration, because the drivers are GPL for some dumb reason).
This sort of thing is less likely to happen to rpi, but it’s also getting pretty frustrating at this point.
Maybe the LLM was wrong and manjaro completely broke the gpg chain (again), but it spent a long time following mirror links, checking timestamps and running internet searches, and I spent over an hour on manual debugging.
I was planning to build a NAS from an OPi 5 to minimise power consumption, but ended up going with a Zen 3 Ryzen CPU and have zero regrets. The savings are minuscule and would not justify the costs.
You have to go in with your eyes open with SBCs. If you have a specific task for one, and you can see that it either already supports it or all the required software is there and just needs to be gathered, then they can be great gadgets.
Often they can go their entire lifespan without some hardware feature being usable because of lack of software.
The blunt truth is that someone has to make that software, and you can't expect someone to make it for you. They may make it for you, and that's great, but really if you want a feature supported, it either has to already be supported, or you have to make the support.
It will be interesting to see if AI gets to the point that more people are capable of developing their own resources. It's a hard task and a lot of devices means the hackers are spread thin. It would be nice to see more people able to meaningfully contribute.
Using vendor kernels is standard in embedded development. Upstreaming takes a long time so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstreamed kernel supports enough peripherals that you're not missing anything you need.
I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev instead of as general purpose PCs. For years now you can find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
What's the feasibility these days of using AI-assisted software maintenance for drivers? Does this somewhat bridge the unsupported gap by doing it yourself, or is this not really a valid approach?
The "somehow" is Microsoft, which defines what the hardware architecture of an x86-64 desktop/laptop/server is and builds the compatibility test suite (Windows HLK) to verify conformance. Open source operating systems rely on Microsoft's standardization.
Microsoft's standardization got AMD and Intel to write upstream Linux GPU drivers? Microsoft got Intel to maintain upstream xHCI Linux drivers? Microsoft got people to maintain upstream Linux drivers for touchpads, display controllers, keyboards, etc?
I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.
> You can't have a custom kernel if you can't rebuild the device tree.
What is this supposed to mean? There is no device tree to rebuild on x86 platforms, yet you can have a custom kernel on x86. You sometimes need kernel forks there too, to work with really weird hardware without upstream drivers; there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream releases have all the necessary drivers.
It's a legacy of the IBM PC compatible standard, which had multiple vendors building computers and peripherals that work with each other. Microsoft tried their EEE approach with ACPI, which made suspend flaky in Linux in the early years.
x86 hardware has a standard way to boot and bring up the hardware, usually to at least a minimum level of functionality.
ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.
It's not just about booting though. We solve this with hardware-specific devicetrees, which is less nice in a way than runtime discovery through PCI/ACPI/UEFI/etc, but it works. But we're not just talking about needing a hardware-specific devicetree; we're talking about needing hardware-specific vendor kernels. That's not due to the lack of boot standardization and runtime discovery.
I have always found it perplexing. Why is that required?
Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
Yeah, lack of upstream peripheral drivers for all the little things on the board, plus (AIUI) ARM doesn't have the same self-describing hardware discovery mechanisms that x86 computers have. Basically, standardisation. They're closer to MCUs in that way, is how I found it (though my knowledge is way out of date now; it's been years since I was doing embedded).
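To make that concrete: a devicetree spells out by hand what ACPI/PCI enumeration would discover at runtime. A fragment might look like this (the node names, compatible strings, and addresses here are invented for illustration):

```
/dts-v1/;

/ {
        compatible = "vendor,example-board", "vendor,example-soc";

        /* Nothing on the SoC bus is self-describing, so each
           peripheral's registers and interrupt wiring must be
           declared explicitly for the kernel to find it. */
        serial@ff650000 {
                compatible = "snps,dw-apb-uart";
                reg = <0xff650000 0x100>;
                interrupts = <0 33 4>;
        };
};
```

Miss a node, or get an address wrong, and that peripheral simply doesn't exist as far as the kernel is concerned.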
I've just been doing some reading. The driver situation in Linux is a bit dire.
On the one hand there is no stable driver ABI because that would restrict the ability for Linux to optimize.
On the other hand vendors (like Orange Pi, Samsung, Qualcomm, etc etc) end up maintaining long running and often outdated custom forks of Linux in an effort to hide their driver sources.
There also seems to be a plan to add UEFI support to u-boot[1]. Many of these kinds of boards have u-boot implementations, so they could then boot a UEFI kernel.
However many of these ARM chips have their own sub-architecture in the Linux source tree, I'm not sure that it's possible today to build a single image with them all built in and choose the subarchitecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
> At some point SBCs that require a custom linux image will become unacceptable, right?
The flash images contain information used by the bios to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consoomer "bios menu" you're used to doesn't mean it's wrong. It's just different.
These boards are based off of solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.
So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.
> If the image contains information required to bring up the device, why isn't that data shipped in firmware?
the firmware is usually an extremely minimal set of boot routines loaded on the SOC package itself. to save space and cost, their goal is to jump to an external program.
so, many reasons
- firmware is less modular, meaning you cant ship hardware variants without also shipping firmware updates (the boot blob contains the device tree). also raises cost (see next)
- requires flash, which adds to BOM. intended designs of these ultra low cost SOCs would simply ship a single emmc (which the SD card replaces)
- no guaranteed input device for interactive setup. they'd have to make ui variants, including for weird embedded devices (such as a transit kiosk). and who is that for? a technician who would just reimage the device anyways?
- firmware updates in the field add more complexity. these are often low service or automatic service devices
anyways if you're shipping a highly margin sensitive, mass market device (such as a set top box, which a lot of these chipsets were designed for), the product is not only the SOC but the board reference design. when you buy a pi-style product, you're usually missing out on a huge amount of normally-included ecosystem.
that means that you can get a SBC for cheap using mass produced merchant silicon, but the consumer experience is sub-par. after all, this wasn't designed for your use case :)
I think even custom is unacceptable. It’s too much of a pain having your distro choice restricted to vendor-specific builds. On x86 you can run anything.
Yeah, I ended up using an old Mac mini for my Home Assistant needs. It draws a whopping 7W from the wall at idle (and it's nearly always idle), but the price of a new RPi is the same as 13k hours of electricity for this.
Using whatever compute you have sitting in a drawer usually makes the most sense (including an old phone).
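Back-of-envelope on that trade-off (the electricity price and Pi price below are my own assumptions, chosen to show how the ~13k-hour figure can come about; plug in your local rate):

```python
idle_watts = 7            # old Mac mini at the wall, from the comment above
price_per_kwh = 0.66      # assumed $/kWh -- pick your own local rate
pi_price = 60.0           # assumed rough cost of a new RPi + PSU

kwh_per_hour = idle_watts / 1000
hours_to_break_even = pi_price / (kwh_per_hour * price_per_kwh)
print(round(hours_to_break_even))  # roughly 13k hours, about 1.5 years
```

At cheaper electricity rates the break-even point stretches out to several years, which is why reusing hardware already in the drawer usually wins.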
Looks like the SoC (CIX P1) has Cortex-A720/A520 cores which are Armv9.2, nice.
I've still been on the hunt for a cheap Arm board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find them in the hobbyist price range (this board included, at $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
This seems to be an overkill for most of my workloads that require an SBC.
I would choose Jetson for anything computationally intensive, as Orange Pi 6 Plus's NPU is not even utilized due to lack of software support.
For other workloads, this one seems a bit too large in terms of form factor and power consumption, and an older RK3588 should still be sufficient.
Disappointing on the NPU. I have found it's a point where industry-wide improvement is necessary. People talk tokens/sec, model sizes, what formats are supported... but I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision, which is what allows the 1-bit quantization and whatnot.
But at a certain point I guess it just breaks? And they need an objective "I gave these tokens, I got out those tokens". But I guess that would need an objective gold standard ground truth that's maybe hard to come by.
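A minimal version of the "I gave these tokens, I got out those tokens" check could just score positional agreement against a gold reference (the scoring scheme and token strings here are my own illustration, not any standard benchmark):

```python
def token_match_rate(generated, reference):
    # Fraction of reference positions where the generated token agrees
    # with the gold-standard token at the same position.
    matches = sum(g == r for g, r in zip(generated, reference))
    return matches / max(len(reference), 1)

gold = ["the", "cat", "sat", "on", "the", "mat"]
out  = ["the", "cat", "sat", "on", "a",   "mat"]
print(token_match_rate(out, gold))  # 5 of 6 positions match
```

The hard part, as noted above, isn't the scoring; it's agreeing on what the gold reference should be in the first place.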
There are a couple of outfits making M.2 AI accelerators. Recently I noticed this one: the DeepX DX-M1M, a 25 TOPS (INT8) M.2 module from Radxa[1]: https://radxa.com/products/aicore/dx-m1m
If you're in the business of selling unbundled edge accelerators, you're strongly incentivized to modularize your NPU software stack for arbitrary hosts, which increases the likelihood that it actually works, and for more than one particular kernel.
If I had an embedded AI use case, this is something I'd look at hard.
The even more confounding factor is that there are specific builds provided by every vendor of these Cix P1 systems: Radxa, Orange Pi, Minisforum, now MetaComputing... it is painful to try to sort it all out, even as someone who knows where to look.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
I was also on board until he got to the NPU downsides. I don't care about LLM use, but I would like to see the ability to run smallish ONNX models generated from a classical ML workflow. Not only is a GPU overkill for the tasks I'm considering, but I'm also concerned that unattended GPUs out on the edge will be repurposed for something else (video games, crypto mining, or just straight up ganked).
Just try to find some benchmark top_k, temp, etc. parameters for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so it's at least deterministic in its random probabilities.
Right. There are countless parameters and seeds and whatnot to tweak. But theoretically, if all the inputs are the same, the outputs should be within epsilon of a known good run. I wouldn't even mandate that temperature or any other parameter be a specific value, just that it's the same. That way you can make sure even the pseudorandom processes are the same, as long as nothing pulls from a hardware RNG or something like that. Which seems reasonable for them to do, so idk, maybe an "insecure RNG" mode.
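The seed half of this is easy to sketch. With a toy stand-in for a sampler (not llama.cpp's actual code), fixing the seed makes every pseudorandom draw repeat exactly:

```python
import random

def sample_tokens(seed, n_tokens=20, vocab_size=50):
    # Stand-in for an LLM sampler: all "randomness" comes from a PRNG
    # seeded explicitly, so two runs with identical parameters agree
    # token-for-token.
    rng = random.Random(seed)
    return [rng.randrange(vocab_size) for _ in range(n_tokens)]

run_a = sample_tokens(seed=42)
run_b = sample_tokens(seed=42)
assert run_a == run_b  # reproducible as long as nothing taps a hardware RNG
```

The part that breaks in practice is the arithmetic underneath the sampler, not the sampler itself, as the replies below point out.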
By default CUDA isn't deterministic because of thread scheduling.
The main difference comes from the rounding order of reductions.
It only makes a small difference, unless you have an unstable floating-point algorithm; but if you have an unstable floating-point algorithm on a GPU at low precision, you were doomed from the start.
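That rounding-order effect is easy to see even on a CPU. Floating-point addition is not associative, so a reduction that groups the same operands differently can land on a different double:

```python
# Same three operands, two groupings -- IEEE-754 rounds each
# intermediate sum, so the results differ in the last bit.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False
```

Multiply that by thousands of threads reducing in whatever order the scheduler picks, and you get run-to-run wobble even with identical inputs.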
Unfortunately they're only available atm at extremely high prices. I'd like to pick some up to create a Ceph cluster (1x 18TB HDD OSD per node, in an 8-node cluster with 4+2 erasure coding).
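Back-of-envelope for that cluster (assuming whole-drive OSDs and ignoring Ceph's own overhead and reserved free space):

```python
nodes = 8
tb_per_osd = 18
k, m = 4, 2               # 4+2 erasure coding: 4 data + 2 parity chunks

raw_tb = nodes * tb_per_osd          # 144 TB raw
usable_tb = raw_tb * k / (k + m)     # 2/3 efficiency -> 96 TB usable
tolerated_failures = m               # survives any 2 chunk/node losses
print(raw_tb, usable_tb, tolerated_failures)
```

With 4+2 over 8 nodes you also keep two spare placement targets, so recovery has somewhere to go after a node dies.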
But I just don't get... everything. I don't get the org, I don't get the users on HN. I'm like Skinner in the 'no, the kids are wrong' meme.
It's a lambda. It's cheap: plug in, ssh, forget. And it's bloody wonderful.
If you buy a 1 or 2 off ebay, ok maybe a 3.
After that? Get a damn computer.
Want more bandwidth on the rj45? Get a computer.
Want faster usb? Get a computer.
Want ssd? Get a computer
Want a retro computing device? Get a computer.
Want a computer experience? Get a computer.
Etc etc etc, I don't need to labour this.
Want something that will sit there, have ssh and run python scripts for years without a reboot? Spend 20 quid on ebay.
People demanded faster horses. And the raspi org, for some damn fool reason, tried to give them faster horses.
There are people bemoaning the fact that Raspberry Pis aren't able to run LLMs, who will then, without irony, complain that the prices are too high. For the love of God, raspi org, stop listening to dickheads on the Internet. Stop paying youtubers to shill. Stop and focus.
This resonates. I still have a Pi 3B running pihole and it's been up for years. No updates needed, just works. The newer boards trying to compete with mini PCs feels like a different product category entirely.
Unreliable USB: https://github.com/raspberrypi/linux/issues/3259
Unreliable Wi-Fi:
* https://github.com/raspberrypi/linux/issues/7092
* https://github.com/raspberrypi/linux/issues/7111
* https://github.com/raspberrypi/linux/issues/7272
I thought raspberry pi could basically run a mainline kernel these days -- are there unsupported peripherals besides Broadcom's GPU?
https://www.raspberrypi.com/software/operating-systems/
Were people actually doing that?
I wouldn't wish it upon an enemy, but it's a thing.
There are no ARM NUCs at such prices, and even if there were the GNU/Linux support would be horrible.
Unless you strictly need the tiny form factor of an SBC you are so much better going with x86.
It's 127 x 127 x 50.8 mm. I think most mini N100 PCs are around that size.
The OrangePi 5 Max board is 89x57mm (the spec sheet says 1.6mm "thickness", but I think that's a typo - the ethernet port alone is taller than that)
Add a few mm for a case and it's roughly 2/3 as long and half the width of the A40.
[1] https://manuals.plus/asin/B0DG8P4DGV
1: https://www.ecs.com.tw/en/Product/Mini-PC/LIVA_Q2/
Big by comparison, but still pretty small
Sometimes easier to acquire, but usually the same price or more expensive.
Not sure how this compares to the OrangePI in terms of performance per watt but it is already pretty far into the area of marginal gains for me at the cost of having to deal with ARM, custom housing, adapters to ensure the wall socket draw to be efficient etc. Having an efficient pico psu power a pi or orange pi is also not cheap.
- Boost enabled
- WiFi disabled
- No changes to P clock states or anything from the BIOS
- Fedora
- Applied all suggestions from powertop

I don't recall changing anything else.
A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
It has major overheating issues though, the N100 was never meant to be put on such a tiny PCB.
https://www.armbian.com/boards?vendor=xunlong
Worse, if you flash it to UEFI you’ll lose compat with the one system that did support it (older versions of BredOS). For that, you grab an old release, and never update. If you’re running something simple that you know won’t benefit from any update at all, that’s great. An RK3588 is a decent piece of kit though, and it really deserves better.
As far as I can tell, the OrangePi 6 remains distinctly uncompetitive with SBCs based on low-end intel chips.
- Orange Pi consumes much more power (despite being an ARM CPU)
- A bit faster on some benchmarks, a bit slower on others
- Intel SBC is about 60% the price, and comes with case + storage
- Intel SBC runs mainline Linux and everything has working drivers
Don't bother trying anything before kernel 6.18.x -- unless you are willing to stick with their 6.1.x kernel with a million+ line diff.
The u-boot environment that comes with the board is hacked up. eg: It supports an undocumented subset of extlinux.conf ... just enough that whatever Debian writes by default breaks it. Luckily, the u-boot project does support the board and I was able to flash a newer u-boot to the boot media and then to the onboard flash [1].
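For context, the extlinux.conf that u-boot's generic distro-boot code parses is tiny. A minimal sketch (paths, partition, and the console= baud are illustrative guesses, the baud being a common Rockchip default, not taken from this board):

```
# /boot/extlinux/extlinux.conf
default debian
timeout 3

label debian
    linux /boot/vmlinuz
    initrd /boot/initrd.img
    fdtdir /boot/dtbs
    append root=/dev/mmcblk0p2 rw console=ttyS2,1500000
```

Vendor u-boots that only handle a subset of these directives tend to choke as soon as a distro's update hook rewrites the file.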
Now the HDMI port doesn't show anything, and I use a couple of serial pins when I need to do anything before it's on-net.
--
I purchased a Rock 5T (also rk3588) and the story is similar ... but upstream support for the board is much worse. Doing a diff between device trees [2] (supplied via custom Debian image vs vanilla kernel) tells me a lot. eg: there are addresses that are different between the two.
Upstream u-boot doesn't have support for the board explicitly.
No display, serial console doesn't work after boot.
I just wanted this board for its dual 2.5Gb ethernet ports but the ports even seem buggy. It might be an issue with my ISP... they seem to think otherwise.
--
Not being able to run a vanilla kernel/u-boot is a deal-breaker for me. If I can't upgrade my kernel to deal with a vulnerability without the company existing/supporting my particular board, I'm not comfortable using it.
IMHO, these boards exist in a space somewhere between the old-embedded world (where just having a working image is enough) and the modern linux world (where one needs to be able to update/apply patches)
[1] https://www.reddit.com/r/OrangePI/comments/1l6hnqk/comment/n...
[2] https://gist.github.com/imoverclocked/1354ef79bd24318b885527...
Not on this list are the current GPU Vulkan drivers Collabora is working on, too. I don't think that's really Rockchip's fault, since they're ARM Mali-G610 GPUs, but yeah, those didn't get stable in Mesa until last year.
It's pretty hacky for sure, but I wouldn't classify it as useless. E.g. I managed to get some LLMs to run on the NPU of an Orange Pi 5 a while back.
I see there is now even an NPU-compatible llama.cpp fork, though I haven't tried it.
I guess manjaro just abandoned arm entirely. The options are armbian (probably the pragmatic choice, but fsck systemd), or openbsd (no video acceleration because the drivers are gpl for some dumb reason).
This sort of thing is less likely to happen to rpi, but it’s also getting pretty frustrating at this point.
Er?
https://manjaro.org/products/download/arm explicitly lists the pinebook pro?
Maybe the LLM was wrong and manjaro completely broke the gpg chain (again), but it spent a long time following mirror links, checking timestamps and running internet searches, and I spent over an hour on manual debugging.
Often they can go their entire lifespan without some hardware feature being usable because of lack of software.
The blunt truth is that someone has to make that software, and you can't expect someone to make it for you. They may make it for you, and that's great, but really if you want a feature supported, it either has to already be supported, or you have to make the support.
It will be interesting to see if AI gets to the point that more people are capable of developing their own resources. It's a hard task and a lot of devices means the hackers are spread thin. It would be nice to see more people able to meaningfully contribute.
Right?
I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev instead of as general purpose PCs. For years now you can find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.
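For illustration of what devicetree replaces: on ARM boards without ACPI, the kernel doesn't discover peripherals at runtime, it is handed a description of where they live. A made-up fragment (names and addresses are invented, not from any real board):

```
/dts-v1/;

/ {
    model = "Vendor Example Board";
    compatible = "vendor,example-board";

    serial@ff130000 {
        compatible = "vendor,example-uart";
        reg = <0xff130000 0x100>;   /* MMIO base + size */
        status = "okay";
    };
};
```

If the shipped .dts is wrong or only exists in a vendor fork, a mainline kernel simply has no way to find the hardware.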
What is this supposed to mean? There is no device tree to rebuild on x86 platforms yet you can have a custom kernel on x86 platforms. You sometimes need to use kernel forks there too to work with really weird hardware without upstream drivers, there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream kernel releases has all the necessary drivers.
ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.
Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
On the one hand there is no stable driver ABI because that would restrict the ability for Linux to optimize.
On the other hand vendors (like Orange Pi, Samsung, Qualcomm, etc etc) end up maintaining long running and often outdated custom forks of Linux in an effort to hide their driver sources.
Seems..... broken
https://github.com/tianocore/edk2-platforms/tree/master/Plat...
https://github.com/edk2-porting/edk2-rk3588
However many of these ARM chips have their own sub-architecture in the Linux source tree, I'm not sure that it's possible today to build a single image with them all built in and choose the subarchitecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
[1] https://docs.u-boot.org/en/v2021.04/uefi/uefi.html
The flash images contain information used by the bios to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consoomer "bios menu" you're used to doesn't mean it's wrong. It's just different.
These boards are based off of solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.
So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.
the firmware is usually an extremely minimal set of boot routines loaded on the SOC package itself. to save space and cost, their goal is to jump to an external program.
so, many reasons
- firmware is less modular, meaning you cant ship hardware variants without also shipping firmware updates (the boot blob contains the device tree). also raises cost (see next)
- requires flash, which adds to BOM. intended designs of these ultra low cost SOCs would simply ship a single emmc (which the SD card replaces)
- no guaranteed input device for interactive setup. they'd have to make ui variants, including for weird embedded devices (such as a transit kiosk). and who is that for? a technician who would just reimage the device anyways?
- firmware updates in the field add more complexity. these are often low service or automatic service devices
anyways if you're shipping a highly margin sensitive, mass market device (such as a set top box, which a lot of these chipsets were designed for), the product is not only the SOC but the board reference design. when you buy a pi-style product, you're usually missing out on a huge amount of normally-included ecosystem.
that means that you can get a SBC for cheap using mass produced merchant silicon, but the consumer experience is sub-par. after all, this wasn't designed for your use case :)
Proprietary and closed? One can hope.
Using whatever compute you have sitting in a drawer usually makes the most sense (including an old phone).
I've still been on the hunt for a cheap ARM board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find them in the hobbyist price range (this board included, $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
Can also plug in a power bank. https://us.ugreen.com/collections/power-bank?sort_by=price-d...
The advantage is that if the machine breaks or is upgraded, the dock and pb can be retained. Would also distribute the price.
The dock and pb can also be kept away to lower heat to avoid a fan in the housing, ideally.
Better hardware should end up leading to better software, which is the platform's main problem right now.
This 10-in-1 dock even has an SSD enclosure for $80 https://us.ugreen.com/products/ugreen-10-in-1-usb-c-hub-ssd (no affiliation) (no drivers required)
I'd have another dock/power/screen combo for traveling and portable use.
But at a certain point I guess it just breaks? And they need an objective "I gave these tokens, I got out those tokens". But I guess that would need an objective gold standard ground truth that's maybe hard to come by.
If you're in the business of selling unbundled edge accelerators, you're strongly incentivized to modularize your NPU software stack for arbitrary hosts, which increases the likelihood that it actually works, and for more than one particular kernel.
If I had an embedded AI use case, this is something I'd look at hard.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
There are some perplexity comparison numbers for the previous gen - Orange pi 5 in link below.
Bit of a mixed bag, but doesn't seem catastrophic across the board. Some models are showing minimal perplexity loss at Q8...
https://github.com/invisiofficial/rk-llama.cpp/blob/rknpu2/g...
Is this a thing? I read an article about how due to some implementation detail of GPUs, you don't actually get deterministic outputs even with temp 0.
But I don't understand that, and haven't experimented with it myself.
The main difference comes from the rounding order of reductions differing between runs.
It does make a small difference. Unless you have an unstable floating point algorithm, but if you have an unstable floating point algorithm on a GPU at low precision you were doomed from the start.
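You can see the order-dependence with plain IEEE 754 doubles, no GPU required. A minimal sketch in awk (which computes in doubles): the same three numbers summed in two orders give two answers.

```shell
# Adding 1.0 into 1e16 first loses it to rounding (ulp at 1e16 is 2);
# cancelling the big terms first keeps it.
awk 'BEGIN {
    a = (1e16 + 1.0) - 1e16   # 1.0 absorbed -> 0
    b = (1e16 - 1e16) + 1.0   # cancellation first -> 1
    print a, b                # prints: 0 1
}'
```

A parallel reduction on a GPU effectively reshuffles these parentheses between runs, which is where the temp-0 nondeterminism comes from.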
```
alias findpi='sudo nmap -sn 192.168.1.0/24 | awk '\''/^Nmap/{ip=$NF}/B8:27:EB|DC:A6:32|E4:5F:01|28:CD:C1/{print ip}'\'''
```
On every `.bashrc` i have.
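For anyone curious how that alias works: the awk half remembers the IP from each `Nmap scan report` line and prints it whenever the following MAC line matches a Raspberry Pi OUI prefix. A self-contained demo on fabricated nmap output:

```shell
# Feed fake `nmap -sn` output through the same awk program the alias uses.
# /^Nmap/ stores the last-seen IP; the OUI regex prints it on a match.
printf '%s\n' \
  'Nmap scan report for 192.168.1.23' \
  'Host is up (0.0042s latency).' \
  'MAC Address: B8:27:EB:AA:BB:CC (Raspberry Pi Foundation)' \
  'Nmap scan report for 192.168.1.50' \
  'MAC Address: 00:11:22:33:44:55 (Other Vendor)' |
awk '/^Nmap/{ip=$NF}/B8:27:EB|DC:A6:32|E4:5F:01|28:CD:C1/{print ip}'
# prints: 192.168.1.23
```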
But I just don't get... everything, I don't get the org, I don't get the users on hn, I'm like skinner in the 'no the kids are wrong' meme.
It's a lambda. It's a cheap, plug in, ssh, forget. And it's bloody wonderful.
If you buy a 1 or 2 off ebay, ok maybe a 3.
After that? Get a damn computer.
Want more bandwidth on the rj45? Get a computer.
Want faster usb? Get a computer.
Want ssd? Get a computer
Want a retro computing device? Get a computer.
Want a computer experience? Etc etc etc, i don't need to labour this.
Want something that will sit there, have ssh and run python scripts for years without a reboot? Spend 20 quid on ebay.
People demanded faster horses. And the raspi org, for some damn fool reason, tried to give them one.
There are people bemoaning the fact that Raspberry Pis aren't able to run LLMs. And they will then, without irony, complain that the prices are too high. For the love of God, raspi org, stop listening to dickheads on the Internet. Stop paying youtubers to shill. Stop and focus.
You won't win this game
> On every `.bashrc` i have.
You might want to try mDNS / avahi