• 129 Posts
  • 938 Comments
Joined 2 years ago
Cake day: July 2nd, 2023



  • One way to make this more Pythonic – and less C or POSIX-oriented – is to use the pathlib library for all filesystem operations. For example, while you could open a file in a context manager, pathlib makes it really easy to read a file:

    from pathlib import Path
    ...
    
    config = Path("/some/file/here.conf").read_text()
    

    This automatically opens the file (which checks for existence), reads out the entire file as a string (rather than bytes, though there’s a method for that too), and then closes the file. If any of those steps go awry, you get a Python exception and a traceback explaining exactly what happened.
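
    If the file might be missing, or you need raw bytes instead, the same pattern applies; here is a minimal sketch (reusing the example path from above):

    from pathlib import Path

    try:
        raw = Path("/some/file/here.conf").read_bytes()  # bytes instead of str
    except FileNotFoundError:
        # read_text()/read_bytes() raise the usual OSError subclasses on failure
        raw = b""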



  • To many of life’s either-or questions, we often struggle when the answer is: yes. That is to say, two things can hold true at the same time: 1) LLMs can result in job redundancies, and 2) LLMs hallucinate results.

    But if we just stopped the analysis there, we wouldn’t have learned anything. To use this reality to terminate any additional critical thinking is, IMO, wholly inappropriate for solving modern challenges, and so we must look into the exact contours of how true these statements are.

    To wit, LLM-induced job redundancies could come from skills which have been displaced by the things LLMs can do well. For example, typists lost their jobs when businesspeople were expected to operate a typewriter on their own. And when word processing software came into existence for the personal computer, a lot of typewriter companies folded or were consolidated. In the case of LLMs, consider that people do use them to proofread letters for spelling and grammar.

    Technologically, we’ve had spell-check software for a while, but grammar was harder. In turn, an industry appeared somewhere in the late 2000s or early 2010s to develop grammar software. Imagine how the software devs at these companies (eg Grammarly) might be in a precarious situation, if an LLM can do the same work. At least with grammar checking, even the best grammar software still struggles with some of the more esoteric English sentence constructions, so if an LLM isn’t 100% perfect, that’s still acceptable. I can absolutely see the fortunes of grammar software companies suffering due to LLMs, and that means those software devs are indeed threatened by what LLMs can do.

    For the second statement, it is trivial to find examples of LLMs hallucinating, sometimes spectacularly or in ways that seem ironic (although an LLM would be hard-pressed to simulate the intention of irony, I would think). In some fields, such hallucinations are career-limiting moves for the user, such as if an LLM were used to advise on pharmaceutical dosage, or used to draft a bogus legal appeal and the judge is not amused. This is very much a FAFO situation, where somehow the AI/LLM companies are burdened with none of the risk and all of the upside. It’s like how autonomous driving automotive companies are somehow allowed to do public road tests of their beta-quality designs, but the liability for crashes still befalls the poor sod seated behind the wheel. Those companies just keep yapping about how those crashes are all “human error” and “an autonomous car is still safer”.

    But I digress.

    My point is that LLMs have quite a lot of capabilities, and people make a serious mistake when they assume that its competence (or incompetence) in one capacity predicts its competence in another. This is not unlike how humans assess other humans, such as how a record-setting F1 driver would probably be a very good chauffeur for a limousine company. But whereas humans have patterns that suggest they might be good (or bad) at something, LLMs are a creature unlike anything else.

    I personally am not bullish on additional LLM improvements, and think the next big push will require additional academic research that is nowhere near commercialization. But even I have to recognize that some very specific tasks can be done decently with today’s available LLMs. I just don’t think that’s good enough for me to consider using them, given their subscription costs, the risk of becoming dependent on them, and how niche those tasks are.





  • Let me make sure I understand the background info. Before things crashed, you had two machines that shared a two-way laser serial link, and so your testing involved sending from one machine to the other, as a way to exercise the TUN driver. Now that the second machine is dead, you wish to light up a spare two-way laser serial link. But rather than connecting to the second (dead) machine or some third machine, this spare link is functionally a “loop back” to the existing machine, the one that’s still alive. And you wish to continue your testing with this revised setup, to save yourself from having to commute to the office just to reboot the second machine.

    Do I have that right? If so, firstly, it’s a Saturday in all parts of the world lol. But provided that you’re getting sufficient rest from work, I will continue.

    As it stands, you are correct that the Linux machine will prefer to pass traffic internally, when it sees that the destination is local. We can try to defeat this, but it’s very much like cutting against the grain. This involves removing the kernel stack’s tendency to route packets locally, but only for the traffic going to/from the TUN interfaces. But if you get this wrong, you might lose access to the machine, and now you have 0/2 working machines…

    IMO, a better solution would be to move at least one of the TUN interfaces into its own “network namespace”. This is the Linux kernel’s idea of separate network stacks, and is one of the constituent technologies used to enable containers (which are like VMs but more lightweight). Since you only require the traffic to exit on one TUN netif and come back in on the other TUN netif, this could work.

    First, you create a new namespace (I’ll call it bobby), then you move tun11 into the bobby ns, and then you run all your commands in a shell that’s spawned within the bobby ns. The last part means you still have access to all your files and the filesystem, but because you’re in a separate network namespace, you will not see the same netifs that would show up in the “default” namespace.

    Here are the commands, though it’s worth double-checking them against a reference as well:

    ip netns add bobby               # create the new namespace
    ip link set tun11 netns bobby    # move tun11 into it
    ip netns exec bobby /bin/bash    # spawn a shell inside the namespace
    

    From inside this shell, you can access tun11 (and only tun11). You’ll want to open a second SSH connection to your remote machine, which will naturally be in the “default” namespace and will give you access to the tun10 netif (but not tun11).

    Good luck!


  • Using an MSP430 microcontroller, I once wrote an assembly routine that (ab)used its SPI peripheral in order to stream a bit pattern from memory out to a GPIO pin, at full CPU clock rate, which would light up a “pixel” – or blacken it – in an analog video signal. This was for a project that superimposed an OSD onto the video feed of a dashcam, so that pertinent vehicle data would be indelibly recorded along with the video. It was for one heck of a university project car.

    To do this, I had to study the MSP430 instruction timings, which revealed that a byte could be loaded from SRAM into the SPI output register, a counter incremented, and a comparison made against a limit value in a tight loop, all within exactly 8 CPU cycles. The SPI completes an 8-bit transfer every 8 SPI clock cycles, and the CPU and SPI blocks can use the same clock source. In this way, I could prepare a “frame buffer” of bits to write to the screen – there’s plenty of time during the vertical blanking interval of analog video – and then blast it atop the video signal.

    I think I ended up running it at 8 MHz, which gave me sufficient pixel resolution on a 480i analog video signal. Also related was the task of creating a set of typefaces which would be legible on-screen but also be efficient to store in the MSP430’s limited SRAM and EEPROM memories. My job was basically done when someone else was able to use printf() and it actually displayed text over the video.

    This MSP430 did not have a DMA engine, and even if it did, few engines permit an N-to-1 transaction to write directly to the SPI output register. Toggling the GPIO register directly was out of the question, since it takes multiple clock cycles to toggle a single bit and load the next value, whereas my solution sustained 1 bit per clock cycle at 8 MHz. All interrupts were disabled too, except for the vertical and horizontal blanking intervals, which basically dictated the “thinking time” available for the CPU.
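
    If it helps to visualize the frame-buffer part, here’s a rough host-side sketch in Python (not MSP430 assembly; the glyph data, dimensions, and pack_scanline helper are made up for illustration) of packing one scanline of OSD glyph pixels into bytes, so that an SPI peripheral shifting MSB-first emits exactly one pixel per clock:

    # Hypothetical 5x7 glyph for 'A': one int per row, bits 4..0 = left..right pixel
    GLYPH_A = [
        0b01110,
        0b10001,
        0b10001,
        0b11111,
        0b10001,
        0b10001,
        0b10001,
    ]

    def pack_scanline(glyphs, row):
        """Concatenate one row of each glyph into a bit list, then pack into
        bytes (MSB first, padded with 0 = leave the video signal untouched)."""
        bits = []
        for glyph in glyphs:
            bits.extend((glyph[row] >> (4 - i)) & 1 for i in range(5))  # 5 px wide
        while len(bits) % 8:            # pad to whole bytes for the SPI register
            bits.append(0)
        out = bytearray()
        for i in range(0, len(bits), 8):
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b  # MSB shifted out first, so leftmost pixel first
            out.append(byte)
        return bytes(out)

    # First scanline of "AAA" as it would sit in the frame buffer
    print(pack_scanline([GLYPH_A, GLYPH_A, GLYPH_A], row=0).hex())

    On the real hardware, each packed byte would then be written to the SPI transmit register once every 8 clocks, exactly as described above.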


  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Password managers...
    2 days ago

    For a single password, it is indeed illogical to distribute it to others, in order to prevent it from being stolen and misused.

    That said, the concept of distributing authority amongst others is quite sound. Instead of each owner having the whole secret, they only have a portion of it, and a majority of owners need to agree in order to combine their parts and use the secret. Rather than passwords, it’s typically used for cryptographically signing off on something’s authenticity (eg software updates), where it’s known as threshold signatures:

    Imagine for a moment, instead of having 1 secret key, you have 7 secret keys, of which 4 are required to cooperate in the FROST protocol to produce a signature for a given message. You can replace these numbers with some integer t (instead of 4) out of n (instead of 7).

    This signature is valid for a single public key.

    If fewer than t participants are dishonest, the entire protocol is secure.
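
    FROST itself is a threshold Schnorr signature scheme and too involved to sketch in a comment, but the underlying t-of-n idea can be illustrated with plain Shamir secret sharing. A minimal Python sketch, with an arbitrary toy prime and secret (not production parameters):

    import random

    PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the toy secret

    def split(secret, t, n):
        """Split `secret` into n shares; any t of them can reconstruct it."""
        # Random polynomial of degree t-1 with the secret as the constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = split(secret=123456789, t=4, n=7)
    assert reconstruct(shares[:4]) == 123456789   # any 4 of the 7 shares suffice
    assert reconstruct(shares[3:]) == 123456789   # a different 4 also works

    Any group of fewer than t shares reveals nothing about the secret, which is the same threshold property the FROST signers rely on.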


  • Related to moderation are the notions of procedural fairness, including 1) the idea that rules should be applied to all users equally, 2) that rules should not favor certain users or content, and 3) that there exists a process to seek redress, to list a few examples. These are laudable goals, but I posit that these can never be 100% realized on an online platform, not for small-scale Lemmy instances nor for the largest of social media platforms.

    The first idea is demonstrably incompatible with the requisite avoidance of becoming a Nazi bar. Nazis and adjoining quislings cannot be accommodated, unless the desire is to become the next Gab. Rejecting Nazis necessarily treats them differently than other users, but it keeps the platform alive and healthy.

    The second idea isn’t compatible with why most people set up instances or join a social media platform. Fediverse instances exist either as an extension of a single person (self-hosting for just themselves) or to promote some subset of communities (eg a Minnesota-specific instance). Meanwhile, large platforms like Meta exist to make money from ads. Naturally, they favor anything that gets more clicks (eg click bait) than adorable cat videos that make zero revenue.

    The third idea would be feasible, except that it is a massive attack vector: even the largest companies cannot staff – if they even wanted to – enough customer service personnel to deal with a 24/7 barrage of malicious, auto-generated campaigns that flood them with invalid complaints, whereas such a denial-of-service attack against an in-person complaints desk would be relatively easy to manage.

    So once again, social media platforms – and each Fediverse instance is its own small platform – have to make some choices based on practicalities, their values, and their objectives. Anyone who says it should be easy has not looked into it enough.


  • Reddit has global scope, and so their moderation decisions are necessarily geared towards trying to be legally and morally acceptable in as many places as possible. Here is Mike Masnick on exactly what challenges any new social media platform faces, and even some which Lemmy et al may have to face in due course: https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you-speed-run-the-content-moderation-learning-curve/ . Note: Masnick is on the board of BlueSky, since it was his paper on Protocols, Not Platforms that inspired BlueSky. But compared to the Fediverse, BlueSky has not achieved the same level of decentralization yet, having valued scale. Every social media network chooses their tradeoffs; it’s part of the bargain.

    The good news is that the Fediverse avoids the problems related to trying to please advertisers. The bad news is that users still do not voluntarily go to “the Nazi bar” if they have any other equivalent option; Masnick has also written about that dynamic at scale. All Fediverse instances must still work to avoid inadvertently becoming the Nazi bar.

    But being small and avoiding scaling issues is not all roses for the Fediverse. Not scaling means fewer resources and fewer people to do moderation. Today, most instances range from individual passion projects to small collectives. The mods and admins are typically volunteers, not salaried staff. A few instances have companies backing them, but that doesn’t mean they’d commit resources as though it were crucial to business success. Thus, the challenge is to deliver the best value to users on a slim budget.

    Ideally, users will behave themselves on most days, but moderation is precisely required on the days they’re not behaving.


  • I agree that a faraway, loud, energy-hungry data center used for AI comes with a huge host of negatives for the locals, to the point that I’m not sure why they keep getting building approval.

    But my point is that in an eventual post-bubble world where AI has its market correction, there will be at least some salvage value in a building that already has power and data connections. A loud, energy-hungry data center can be tamed to be quiet and energy-sipping, depending on what hardware it’s filled with. Remove the GPUs and add some plain servers, and that’s a run-of-the-mill data center, the likes of which have been neighbors to urbanites for decades.

    I suppose I’d rehash my opinion as such: building new data centers can be wasteful, but I think changing out the workload can do a lot to reduce the impacts (aka harm reduction), making it less like reopening a landfill, and more like rededicating a warehouse. If the building is already standing, there’s no point in tearing it down without cause. Worst case, it becomes climate-controlled paper document storage, which is the least impactful use-case I can imagine.



  • Absolutely, yes. I didn’t want to elongate my comment further, but one odd benefit of the Dot Com bubble collapsing was all of the dark fibre optic cable laid in the ground. That fibre would later be lit up, to provide additional bandwidth or private circuits, and some even became fibre to the home, since some municipalities ended up owning the fibre network.

    In a strange twist, the company that produced a lot of this fibre optic cable and nearly went bankrupt during the bubble pop – Corning Glass – would later become instrumental in another boom, because their glass expertise meant they knew how to produce durable smartphone screens. They are the maker of Gorilla Glass.


  • I’m not going to come running to the defense of private equity (PE) firms, but compared to so-called AI companies, the PE firms are at least building tangible things that have an ostensible alternative use. A physical data center building – even one located far away from the typical metropolitan areas that have better connectivity to the world’s fibre networks – will still be an asset with some utility, when/if the AI bubble pops.

    In that scenario, the PE firm would certainly take a haircut on their investment, but they’d still get something because an already-built data center will sell for some non-zero price, with possible buyers being the conventional, non-AI companies that just happen to need some cheap rack space. Looking at the AI companies though, what assets do they have which carry some intrinsic value?

    It is often said that during the California Gold Rush, the richest people were not those who staked out the best gold mining sites, but those who sold pickaxes to miners. At least until gold fever gave way to the sober realization that it was overhyped. So too will PE firms pivot to whatever comes next, selling their remaining interest from the prior hype cycle and moving on to the next one.

    I’ve opined before that because no one knows when the bubble will burst, it is simultaneously financially dangerous to: 1) invest in that market segment, and 2) exit from it. And so if a PE firm has already bet most of the farm, then they might just have to follow through with it and pray for the best.


  • I presume we’re talking about superconductors; I don’t know what a supra (?) conductor would be.

    There are two questions here: 1) how much superconducting material is required for today’s state-of-the-art quantum computers, and 2) how quantum computers would be commercialized. The first deals in material science and whether more-capable superconductors can be developed at scale, ideally operating at room temperature so they wouldn’t require liquid helium. Even a plentiful superconductor that merely requires liquid nitrogen would be a big improvement.

    But the second question is probably the limiting factor, because although quantum computers are billed as the next iteration of computing, the fact of the matter is that “classical” computers will still be able to do most workloads faster than quantum computers, today and well into the future.

    The reality is that quantum computers excel at only a specific subset of computational tasks, which classically might require mass parallelism. For example, breaking encryption algorithms is one such task, but even applying Grover’s algorithm optimally, the speed-up is a square-root factor. That is to say, if a cryptographic algorithm would need 2^128 operations to brute-force on a classical computer, then an optimal quantum computer would only need 2^64 quantum operations. If quantum computers achieve the equivalent performance of today’s classical computers, then 2^64 is achievable, so that cryptographic algorithm is broken.
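
    As a toy sanity check of that square-root relationship, in plain Python with the numbers above:

    import math

    classical_ops = 2**128                    # brute-force work on a classical computer
    quantum_ops = math.isqrt(classical_ops)   # square-root speed-up from Grover's algorithm
    assert quantum_ops == 2**64               # feasible only if quantum ops get as cheap as classical ones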

    If. And it’s kinda easy to see how to avoid this problem: use “bigger” cryptographic algorithms. So what would quantum computers be commercialized for? Quite frankly, I have no idea: until quantum computers are commonly available, and there is a workload which classical computers cannot reasonably do, there won’t be a market for them.

    If I had to guess, I imagine that graph theorists will like quantum computers, because graph problems can blow up in complexity really fast on classical machines, but are more tame on quantum computers. But the only commercial applications from that would be for social media (eg Facebook hires a lot of graph theorists) and surveillance (finding correlations in masses of data). Uh, those are not wide markets, although they would have deep pockets to pay for experimental quantum computers.

    So uh, not much that would benefit the average person.



  • For anyone who eats instant noodles regularly, drinks coffee made using an Aeropress or a pour-over, makes Jello, or has any other application where the water must already be boiling hot before being added, the electric kettle is king.

    It also avoids the quandary of having to carefully move an open-topped cup full of boiling water from the microwave to wherever it is needed. Some Japanese electric kettles are even fully thermally insulated and proofed against tip-over. These units require a positive actuation of a trigger in order to dispense; tilting the kettle isn’t enough.

    And finally, using an electric kettle does not temporarily cause radio interference in the 2.4 GHz spectrum, with attendant WiFi and Bluetooth signal reductions.