Yes, and it's also that switches got cheaper so most new installations only connect two nodes with a single physical conductor.
ChaoticNeutralCzech
My point is, if you have a shared medium anyway, you can get rid of the MAU by having the nodes manage the (virtual) token themselves, basically taking limited-time turns in some fixed order like ascending MAC addresses. You could then wire the cable any way you want, with unlimited junctions, taps, whatever, as long as you created a graph where all nodes are connected to each other. The entire point of a token ring is to manage a shared medium (that is, a single pair of wires, either UTP or coax, which can efficiently be wired along the shortest, possibly branching, path), because if you have to run a direct connection from every endpoint to a MAU in a star topology, you could just use an Ethernet switch anyway.
Yeah, see the pic in the thread. The "switch" (MAU, Multistation Access Unit) seems redundant to me, though: based on what I read, I would expect the network interface cards to form a functional ring on their own over a shared medium. Maybe the old cards for ring-topology networks only worked in that one mode, and the MAU made them compatible by pretending they were part of a physical ring, cutting computers out of it when they turned off.
I don't know about you but I prefer to guess age by the face.
That's what I thought too unless the pic (left) literally is how cables are arranged??
My understanding was a shared medium (say, all computers in parallel on a single UTP), where they pass a virtual token "packet" that assigns the right to transmit while anyone receives if addressed, like a ball between kindergarteners sitting in a circle.
The pictured ring topology (left) makes it seem like everyone can only talk to the computer one over, which seems awful for efficiency and resilience, while the pictured star topology (right) introduces an authority figure (the MAU is like a kindergarten teacher who decides who walks around and gives the ball to whichever child they think should speak next). Both seem inherently worse than Ethernet - the left can be completely broken by disabling one or two nodes, while the right is just a switched network with less throughput.
I think token ring is a data link layer technology that controls transmission access over the physical connection. Like early non-switched Ethernet, computers are connected in parallel to the same wires, but instead of collision detection and random delays, which caused congestion and serious overhead on busy networks, a "token" is passed around and determines the right to "speak". Everyone listens at the same time and starts receiving packets when addressed. If the computers were literally wired in series like a looping daisy chain, the failure of one would destroy message propagation. Instead, if the token-bearing computer crashes or disconnects from a token ring network, the token is presumed expired after a short while and a new token-bearer is chosen. It's like a kindergarten activity where you sit around in a circle and need to hold the ball to speak, passing it around. It doesn't matter who you're addressing, you can even broadcast, but that's handled by a higher-level protocol.
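The "ball in a circle" idea above can be sketched as a toy simulation. This is not real Token Ring (no frame format, no monitor station), just the scheme described here: nodes take turns in ascending ID order (standing in for MAC addresses), and a token held by a dead node is presumed expired and regenerated. All names and parameters are made up for illustration.

```python
# Toy token-passing sketch: nodes transmit only while holding the token,
# which circulates in ascending node-ID order; a token lost to a dead
# node is regenerated at the lowest live ID.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.alive = True
        self.queue = []          # frames waiting to be sent

def next_holder(nodes, current_id):
    """Pass the token to the next live node in ascending-ID order, wrapping."""
    ids = sorted(n.node_id for n in nodes if n.alive)
    for i in ids:
        if i > current_id:
            return i
    return ids[0]  # wrap around to the lowest live ID

def simulate(nodes, rounds):
    """Run a few token rotations; return a log of (sender, frame) events."""
    log = []
    token = min(n.node_id for n in nodes if n.alive)  # initial token-bearer
    for _ in range(rounds):
        holder = next(n for n in nodes if n.node_id == token)
        if not holder.alive:
            # token "expired": regenerate it at the lowest live ID
            token = min(n.node_id for n in nodes if n.alive)
            continue
        if holder.queue:
            log.append((holder.node_id, holder.queue.pop(0)))
        token = next_holder(nodes, token)
    return log
```

Killing a node mid-ring just makes the token skip it, which is the whole point of a virtual (rather than physically wired) ring.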
As for memos, I have never used them and they seem extremely inefficient.
Edit: looks like Token Ring is actually more physical than I thought, with special cables connecting computers in series, so you may be right. That sounds really stupid as a thing to build a network on, it's easy to cut it in half by disabling just two computers, antithetical to the internet's resiliency principle.
Edit edit: my original understanding was right, the literal cable ring is obsolete for good reason. I still don't get the role of a MAU in the star topology unless it's just needed for old NICs to understand virtual tokens.
Not in my country. But my point still stands as long as there is religious significance to the ritual for some.
I mean, Jewish boys go through a ritual to mark them as part of the religion and christening occurs early too, so I would say that religious people usually assume the baby's religion.
Weird Al actually licenses the songs he parodies because fair use is thin ice.
I know that similar computational problems use indexing and vector-space representation, but how would you build an index of TiBs of almost-random data that makes it faster to find the strictly closest match of an arbitrarily long sequence? I can think of some heuristics, such as bitmapping every occurrence of any 8-pair sequence across each kibibit in the list. A query search would then add up the bitmaps of all 8-pair sequences within the query, including ones with up to 2 errors, and use the resulting map to find "hotspots" to be checked with brute force. This would decrease the computation and storage access per query but drastically increase the storage size, which is already hard to manage.
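A minimal sketch of that hotspot heuristic, with toy parameters (k=4-mers and 64-character blocks instead of 8-pair sequences per kibibit, and skipping the up-to-2-error k-mer expansion for brevity): index which blocks contain each k-mer, score blocks by how many of the query's k-mers they hold, then brute-force only the best candidates. All function names and constants are invented for illustration.

```python
# Hotspot heuristic sketch: a k-mer -> block inverted index narrows the
# brute-force Hamming scan down to a few high-scoring blocks.
from collections import defaultdict

K, BLOCK = 4, 64  # toy stand-ins for the 8-pair / kibibit figures

def build_index(data):
    """Map each k-mer to the set of block numbers where it occurs."""
    index = defaultdict(set)
    for i in range(len(data) - K + 1):
        index[data[i:i+K]].add(i // BLOCK)
    return index

def hotspots(index, query, top=3):
    """Score blocks by how many of the query's k-mers they contain."""
    score = defaultdict(int)
    for i in range(len(query) - K + 1):
        for block in index.get(query[i:i+K], ()):
            score[block] += 1
    return sorted(score, key=score.get, reverse=True)[:top]

def best_match(data, query, blocks):
    """Brute-force Hamming scan restricted to candidate blocks (with overlap)."""
    best = (len(query) + 1, -1)
    for b in blocks:
        lo = max(0, b * BLOCK - len(query))
        hi = min(len(data) - len(query), (b + 1) * BLOCK)
        for pos in range(lo, hi + 1):
            d = sum(a != c for a, c in zip(query, data[pos:pos+len(query)]))
            best = min(best, (d, pos))
    return best  # (distance, position)
```

The storage blow-up mentioned above shows here: the index holds an entry per distinct k-mer per block, and expanding each query k-mer to all 2-error neighbors would multiply the lookup fan-out enormously.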
However, efficient fuzzy string matching in giant datasets is an interesting problem that computer scientists must have encountered before. Can you find a good paper that works well with random, non-delimited data, instead of just the word-based index approach that Lucene and OpenFTS use for human languages?
*Maimais