Because 2^128 is too big to reasonably fill even if you give an IP address to every grain of sand. 64 bits is good enough for network routing, and 64 bits for the host to auto-configure an IP address is a bonus feature. The reason for 64 bits is that it's large enough to avoid collisions when picking a random ephemeral number, and it can fit your 48-bit MAC address if you want a consistent identifier.
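Fitting the MAC into the bottom 64 bits uses the Modified EUI-64 scheme: split the 48-bit MAC in half, insert 0xFFFE between the halves, and flip the universal/local bit. A minimal Rust sketch (the example MAC is made up):

```rust
// Modified EUI-64: expand a 48-bit MAC into a 64-bit IPv6 interface
// identifier by inserting 0xFFFE and flipping the universal/local bit.
fn mac_to_eui64(mac: [u8; 6]) -> [u8; 8] {
    let mut iid = [0u8; 8];
    iid[..3].copy_from_slice(&mac[..3]); // OUI half of the MAC
    iid[3] = 0xFF; // 0xFFFE goes in the middle
    iid[4] = 0xFE;
    iid[5..].copy_from_slice(&mac[3..]); // device half of the MAC
    iid[0] ^= 0x02; // flip the universal/local bit
    iid
}

fn main() {
    // 00:11:22:33:44:55 becomes the identifier 0211:22ff:fe33:4455
    println!("{:02x?}", mac_to_eui64([0x00, 0x11, 0x22, 0x33, 0x44, 0x55]));
}
```

(Modern stacks mostly prefer random identifiers over this scheme for privacy reasons.)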
With a fixed-size host identifier, unlike IPv4's variable-size host part, network renumbering becomes easier. If you separate out the host part of the IP address, a network operator can change IP ranges by simply replacing the top 64 bits with prefix translation, and other computers can still be reached via the unique bottom 64 bits in the new network.
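That prefix translation is just 64 bits of masking. An NPTv6-style sketch in Rust, with made-up example addresses (the real mechanism also keeps transport checksums neutral, which this omits):

```rust
use std::net::Ipv6Addr;

// NPTv6-style renumbering: swap the top 64 bits (the routing prefix)
// while keeping the bottom 64 bits (the host identifier) intact.
fn swap_prefix(addr: Ipv6Addr, new_prefix: u64) -> Ipv6Addr {
    let host = u128::from(addr) & 0xFFFF_FFFF_FFFF_FFFF; // keep low 64 bits
    Ipv6Addr::from(((new_prefix as u128) << 64) | host)
}

fn main() {
    let old: Ipv6Addr = "2001:db8:aaaa:1::42".parse().unwrap();
    // Move the host to the 2001:db8:bbbb:2::/64 network.
    println!("{}", swap_prefix(old, 0x2001_0db8_bbbb_0002));
}
```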
This is what you do if you start with a clean sheet and design a protocol where you don't need to make address scarcity the first priority.
If your software has no bugs then unikernels are a straight upgrade. If your software has bugs then the blast radius for issues is now much larger. When was the last time you needed a kernel debugger for a misbehaving application?
With a standard Windows Server license you are only allowed two Hyper-V virtual machines, but unlimited "Windows containers". The design is similar to Linux, with namespaces bolted onto the main kernel, so they don't provide any better security guarantees than Linux namespaces.
Very useful if you are packaging trusted software and don't want to upgrade your Windows Server license.
There have been countless articles claiming the demise and failure of the F-35, but that is just one side of the story. An argument started 50 years ago, in the 1970s, about how to build the best next-generation fighter jets. One of the camps was called the "Fighter Mafia"[0], figureheaded by John Boyd. The main argument they bring is that the only thing that matters for a fighter jet is how well it performs in one-on-one, short-range dogfighting. They claim that stealth, beyond-visual-range missiles, electronic warfare, and sensor/datalink systems are useless junk that only hinders dogfighting capability and bloats the cost of new jets.
The evidence for this claim came from testing where the F-35 was dogfighting an older F-16. The result was that the F-35 won almost every scenario except one, where a lightly loaded F-16 started directly behind an F-35 weighed down by heavy missiles and won the fight. This one loss has spawned hundreds of articles about how the F-35 is junk that can't dogfight.
In the end the F-35 has a lot of fancy features that are not optional for modern operations. The jet has now found enough buyers across the West for economies of scale to kick in, and the cost is about ~$80 million each, which is cheaper than retrofitting stealth and sensors onto older airframes, which is what you get with the F-15EX.
Yeah, unfortunately no amount of manoeuvring is a substitute for a kill chain where a distributed web of sensors, relays, and weapon carriers can result in an AAM being dispatched from any direction at lightspeed.
It's that as well, but that part of the description doesn't catch how objects are automatically freed once the last reference to them (the owning one) is dropped.
Meanwhile my description doesn't fully capture how it guarantees unique access for writing, while yours does.
> but that part of the description doesn't catch how objects are automatically freed once the last reference to them (the owning one) is dropped.
You're confusing the borrow checker with RAII.
Dropping the last reference to an object does nothing (and even the exclusive &mut is not an "owning" reference). Dropping the object itself is what automatically frees it. See also Box::leak.
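The distinction is easy to demonstrate: tie a flag to a destructor and watch when it actually fires. A small sketch (the Tracked type is made up for illustration):

```rust
use std::cell::Cell;

// A guard that flips a shared flag when its destructor (Drop) runs.
struct Tracked<'a> {
    dropped: &'a Cell<bool>,
}
impl Drop for Tracked<'_> {
    fn drop(&mut self) {
        self.dropped.set(true);
    }
}

fn main() {
    let flag = Cell::new(false);
    let owned = Box::new(Tracked { dropped: &flag });
    {
        let _borrow = &*owned; // a non-owning reference
    } // the borrow ends here, but nothing is freed:
    assert!(!flag.get());
    drop(owned); // dropping the owner is what runs Drop
    assert!(flag.get());

    // Box::leak discards ownership entirely, so Drop never runs.
    let flag2 = Cell::new(false);
    Box::leak(Box::new(Tracked { dropped: &flag2 }));
    assert!(!flag2.get());
}
```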
The reason for detecting the orientation of the connector is higher-speed communication. USB-C 20 Gbps uses both sets of pins on the connector to run two USB 3.2 10 Gbps lanes in parallel to get 20 Gbps. That is why the technical spec name for 20 Gbps is "USB 3.2 Gen 2x2": that's what the "x2" means.
Knowing that USB has this feature, it follows that USB-C needs to be self-orienting in case the two ends of the cable are plugged in with different orientations.
You say Ethernet got this part right; well, it got it right by not having a reversible connector. Ethernet has 4 TX/RX pairs and USB-C has 2 RX/TX pairs per USB 3 connection, with 4 in total for 20 Gbps. The difference is reversibility. Is it worth the tradeoff?
That might work for Ethernet, but how would you do that for any unidirectional USB-C alternate mode without protocol-level feedback such as analog audio or DisplayPort video?
If you want to allow all of
- Reversible connectors
- Passive (except for marking), and as such relatively cheap, adapters and cables
- Not wasting 50% of all pins on a completely symmetric design connected together in the cable or socket
there's no way around having an asymmetrical cable design that lets the host know which signal to output on which pins.
That’s basically how USB-C does it too (except that the chip isn’t strictly necessary; an identifying resistor does the job for legacy adapters and cables).
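For what it's worth, the CC mechanism also gives orientation detection for free: the source pulls up both CC pins, the sink pulls down exactly one of them through Rd, and whichever side sees the pull-down tells the host which way the plug went in. A toy model in Rust (the names and enum are made up; real detection reads analog voltage thresholds):

```rust
// Toy model of USB-C orientation detection via the CC pins: the sink's
// Rd pull-down appears on only one CC line, revealing plug orientation.
#[derive(Debug, PartialEq)]
enum Orientation {
    Normal,     // Rd seen on CC1
    Flipped,    // Rd seen on CC2
    Unattached, // no sink present (or an invalid state)
}

fn detect(cc1_pulled_down: bool, cc2_pulled_down: bool) -> Orientation {
    match (cc1_pulled_down, cc2_pulled_down) {
        (true, false) => Orientation::Normal,
        (false, true) => Orientation::Flipped,
        _ => Orientation::Unattached,
    }
}

fn main() {
    println!("{:?}", detect(false, true)); // Flipped
}
```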
One misconception that everyone keeps repeating is that the Pi 5 expects and needs a 5 V/5 A power supply to work. The CPU and all the I/O will work as expected with any USB PD charger that can do at least 15 watts. The only issue you will have is a power limit on USB peripherals that draw a lot of power, like hard drives. Keyboards, mice, and webcams will work just fine with the 600 mA limit.
Previous Raspberry Pis had low USB power limits and people did not consider those products dead on arrival. Now that they are trying to address a limitation of the original product, people are discovering that the Raspberry Pi was always a very limited platform to begin with, and the next step is not an incremental spec bump but just buying a regular computer.
Except as soon as you have some issue, the first comment will be "are you using the official power supply?" I hate such comments with a passion. Feels very much like corporate support.
Quick summary of the technology: there are two software parts to virtualization, the hypervisor and the virtual machine monitor.
First is the hypervisor, which uses the hardware virtualization features of your CPU to handle hardware interrupts and virtual memory paging. This part is usually built into the operating system kernel, and one is preferred per operating system. Common ones are Hyper-V on Windows, Virtualization.framework on macOS, and KVM on Linux.
With the kernel handling the low-level virtualization, you need a virtual machine monitor (VMM) to handle the higher-level details. The VMM manages which VM image is mounted and how packets in and out of the VM are routed. Some examples of VMMs are QEMU, VirtualBox, and libvirt.
Flint, the app being shown, is a vibe-coded web app wrapper around libvirt. On the bright side this app should be safe to use, but it also does not do much beyond launching pre-made virtual machines. As a developer, the work you need to do is provide a Linux distribution (Ubuntu, etc.), a container manager (Kubernetes, Docker), and launch your own containers or pre-made ones from the internet (Dev Containers).
By that metric even VMware's vSphere, with its abominable excuses for APIs, also counts as elastic.
If you have to manage the hardware yourself, have to plan and pay upfront for the maximum capacity you might need, and there are fixed limits you can hit and have to plan around yourself, it's not elastic.