• 32 Posts
  • 155 Comments
Joined 1 year ago
Cake day: June 9th, 2023



  • Yes .docx.

    It appears as though the encoding is missing in such a way that nothing in Linux recognizes the file. The underlying CLI tools don’t have a way of converting the file: I tried with Python’s docx tool and with iconv. It has to be encoding related, because some tools initially load the file with several sets of Asian characters instead of English. However, there is no hexadecimal or sections of entirely binary-looking data. Archiving tools do not open the file to reveal anything else like a metafile or header. Neovim shows garbled nonsense throughout. Bat warns of binary. Python won’t load the file, nor will OnlyOffice. LibreOffice and AbiWord load initially with Asian characters before crashing.
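
    For what it is worth, a quick look at the first few bytes can at least tell whether the file is really an OOXML .docx (a zip container), an old binary .doc (an OLE2 compound file), RTF, or something else entirely. A minimal Python sketch, with a placeholder filename:

      # Peek at the file signature; a real .docx starts with the zip magic "PK".
      from pathlib import Path

      MAGIC = {
          b"PK\x03\x04": "OOXML (.docx) zip container",
          b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "OLE2 compound file (legacy .doc)",
          b"{\\rt": "RTF text",
      }

      def sniff(path):
          head = Path(path).read_bytes()[:8]
          for magic, name in MAGIC.items():
              if head.startswith(magic):
                  return name
          return f"unknown signature: {head!r}"

      print(sniff("manuscript.docx"))  # placeholder path

    If it sniffs as OLE2, that would be consistent with the Asian-looking characters: a binary legacy .doc being read as if it were text.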

    The only option is likely going to be setting up the W10 machine and converting a bunch of files within it.

    Ultimately, my old man thinks he can be an author all of a sudden and is trying to write. He’s not very capable of learning. I’m not confident that he can learn to use FOSS to do the same thing he has been doing. This post was just to see if there are options I am not already aware of that might actually work in practice. I can easily do everything I need in FOSS. I can do everything he needs to do. I’m more concerned about becoming his tech support when he forgets how to copy pasta. He already fails to separate the internet connection hardware from the web browser and operating system within his mental model of technology.











  • Normally, I would be quite skeptical of what could be involved, and indeed my ability to diagnose the cause is limited, so any conclusion is somewhat speculative. However, the machine is always behind this whitelist firewall, the only new software on the system was the llama.cpp repo and nvcc, and I’ve never encountered a similar connection anomaly.

    I tried to somewhat containerize AI at first, but software like Oobabooga’s Textgen defeated this in its build scripts. I had trouble with some kind of weird issue related to text generation and alignment. I think it is due to sampling, but it could be due to some kind of caching persistence from PyTorch? I’ve never been able to track down a changing file, so the latter is unlikely.

    I typically only use regular FF for a couple of things, including Lemmy occasionally. Most of the extra nonsense in the log is from regular FF. Librewolf is set up to flush everything and store nothing. It only does a few portal checks an hour for whatever reason; I should look into stopping it. With regular FF I just don’t care and don’t use it for much of anything. I just haven’t blocked it in DNF.







  • Arch is a foundational distro like Gentoo. It is not fully configured out of the box and requires considerable configuration to be secure, in ways that are not clearly defined. Pacman is an abysmal package manager for anyone short of a CS degree. Changes are regularly made that assume considerable knowledge of systems and otherwise require in-depth reading. This often involves peripheral packages of no interest to the end user, where an incorrect choice may require restoring from backups.

    In the last 10 years I have run most of the major distros. Arch is the only one where I have ever had to actually use my backups. The third time that happened, I just moved on. Arch does a terrible job of describing these types of changes, and it is deeply frustrating to be using Arch for some detailed project, need to install a package, have some tangential peripheral update that requires input, and get stuck in a rabbit hole of research because of that tangent. Arch will not make anything about this easy or clear the way Gentoo does. Arch will dump you into some dev’s magnum opus of a wiki article that still does not lay out the issue you are actually facing. That article will send you into a bottomless fractal chasm of linked articles, because nothing about Arch is tutorial.

    Yes, Gentoo has some binary packages. People run RHEL and Debian Stable as desktops; they run Fedora as a script server; they use a Steam Deck as a desktop. What a distro is known for is just a stereotype, but it still applies, and the exceptions do not change the primary use case.

    The important choice right now, IMO, is how you protect your bootloader, but I’m a foolish intermediate-level user at best. I do my best to give people the counterpoint to the evangelical believers’ distro. It was by far my worst experience with any distro, and pushing beginners into that experience is downright toxic and counterproductive.


  • Arch is not really a DIY distro, and it is certainly not for noobies. Those saying so are either naive or trolling. Arch is an excellent resource and it has its uses. Arch assumes you have a full understanding of a POSIX operating system and all of its components. Arch has an encyclopedic wealth of great information. What Arch is not is beginner friendly or remotely tutorial. If you try to use Arch to learn how a POSIX system works, you are going to have a very bad experience. It is about like me handing you an encyclopedia and telling you to go learn physics.

    For a beginner, start with Fedora or Ubuntu, because they won’t wreck your system. Fedora is on the cutting edge, while Ubuntu is stable, which only means that most packages are frozen and will not be updated to the latest versions. It means many things are old and outdated, but you can write a high-level script that will never be broken by some change in a software library during a random update. Windows is also a stable (outdated) operating system, so that companies can write software that will not get broken when they fail to maintain it regularly.

    Gentoo is a tutorial distro. It is the compile-everything-yourself, learn-how-everything-works distro. Gentoo must be kept up to date, but if the Portage package manager needs input from the user, the Gentoo packagers lay out the details in a very approachable way, assuming you are competent enough to make it through a Gentoo installation in the first place. Most people take days to weeks to make it through their first full Gentoo installation and configuration. It is guided, but there is an enormous amount to take in and figure out. You’ll be compiling and configuring everything from the bootloader and kernel on up. If you are at the level of competence and understanding where you can run Gentoo, THAT is when you should consider running Arch. Arch is basically Gentoo without the compiling and configuration; all the components are easily accessible in binary form.

    If you run Gentoo for a while and you really want to understand what the packagers are doing on an even deeper level, then do a Linux From Scratch (LFS) build; that is a thing too.

    If you want to learn in situ and actually use a system, use Fedora, get The Linux Bible if you want to learn sysadmin, and leverage the RHEL documentation for more advanced stuff, because RHEL is the original distro and where a lot of the core kernel devs come from. RHEL is downstream of Fedora.

    Every distro has a distinct reason it exists. It is foolish to turn them into team sports like brands. A key part of the learning curve is figuring out why each distro exists and leveraging that knowledge. I use them all for various reasons and know which to turn to for documentation. I may use Fedora as my base, but I have Arch, Ubuntu, and Gentoo running in containers on my host machine, and I have run all three as my base system in the past.
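
    As a rough illustration of that container setup, something like the sketch below is enough to poke at another distro’s packaging or documentation without touching the host. The image names are examples; persistent setups are usually done with toolbox or distrobox on top of podman rather than one-off runs like this.

      import subprocess

      # Throwaway containers for checking another distro's tooling and docs.
      # Image names/tags are examples; adjust to taste.
      IMAGES = {
          "arch": "docker.io/library/archlinux:latest",
          "ubuntu": "docker.io/library/ubuntu:latest",
          "gentoo": "docker.io/gentoo/stage3:latest",
      }

      def run_in(distro, *command):
          """Run a one-off command in a fresh container of the given distro."""
          image = IMAGES[distro]
          subprocess.run(["podman", "run", "--rm", "-it", image, *command], check=True)

      run_in("arch", "cat", "/etc/os-release")  # e.g. confirm which snapshot you pulled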


  • So software like CAD is funny. Under the surface, 3D CAD like FreeCAD or Blender is taking vertices and placing them in a Cartesian space (X/Y/Z planes). Then it builds objects in that space by calculating the mathematical relationships in serial. So each feature you add involves adding math problems to a tree. Each feature on the tree is built linearly and relies on the previously calculated math.

    Editing anything up-tree is a massive issue called the topological naming problem. All CAD has this issue, and all fixes are hacks and patches that are incomplete solutions (it has to do with π and rounding floating point at every stage of the math).
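
    As a toy sketch of why up-tree edits are so fragile (illustrative only, not FreeCAD’s actual data model): each feature stores a reference to geometry that an earlier feature generated, and a rebuild resolves those references in order, so renumbered upstream geometry breaks everything downstream.

      def rebuild(tree):
          """Recompute every feature in order; each step only sees earlier results."""
          edges = []
          for feat in tree:
              if feat["type"] == "pad":
                  # the pad generates some numbered edges
                  edges = [f"pad.edge{i}" for i in range(feat["edges"])]
              elif feat["type"] == "fillet":
                  # the fillet stores a reference to upstream geometry by its
                  # generated index/name -- that stored name is the fragile part
                  edges.append(f"fillet(on {edges[feat['on_edge']]})")
          return edges

      tree = [{"type": "pad", "edges": 4}, {"type": "fillet", "on_edge": 3}]
      print(rebuild(tree))      # the fillet lands on pad.edge3 as intended

      tree[0]["edges"] = 3      # edit "up tree": the pad now generates fewer edges
      try:
          print(rebuild(tree))
      except IndexError:
          print("fillet reference broken: the edge it pointed at no longer exists")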

    Now, this is only the beginning. Assemblies are made of parts that each have their own Cartesian coordinate planes. Often, individual parts have features that reference other parts in a live relationship, where a change in part A also changes part B.

    Now imagine modeling a whole car, a game world, a movie set, or a skyscraper. The assemblies get quite large depending on what you’re working on. Just an entire 3D printer modeled in FreeCAD was more than my last computer could handle.

    Most advanced CAD needs to get to a level of hardware integration where generalizations made for something like Wayland simply are not sufficient. Your default CPU scheduler (CFS on Linux) is set up to maximize throughput at all costs. For CAD, this is not optimal. Process niceness may be enough in most cases, but there may be times when true CPU-set isolation is needed to prevent anything from interrupting the math as it renders. How this is split and managed with a GPU may be important too.
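
    As a minimal Linux-only sketch of those two knobs, with arbitrary example core numbers (true isolation via isolcpus or cgroup cpusets needs boot/kernel configuration beyond this):

      import os

      # lower this process's priority so interactive apps stay responsive
      os.nice(10)

      # pin the heavy recompute to cores 2 and 3 so the scheduler stops migrating it
      os.sched_setaffinity(0, {2, 3})

      print("allowed cores:", os.sched_getaffinity(0))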

    I barely know enough to say this much. When I was pushing my last computer too far with FreeCAD, optimizing the CPU scheduler stopped a crashing problem and extended my use slightly, but it was not worth much; I really needed a better computer. However, looking into the issue deeply was interesting. It revealed how CAD is a solid outlier workflow that is extremely demanding and very different from the rest of the desktop, where user experience is the focus.



  • Secure Boot requires all kernel modules to be signed. The system Fedora uses rebuilds the drivers from source with every new kernel update. It works, but it can’t be modified further.
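
    A rough sketch of checking where you stand, using mokutil and modinfo via Python; output formats vary by distro, so treat it as illustrative:

      import subprocess

      def run(cmd):
          """Return a command's stdout, or an empty string if the tool is missing."""
          try:
              return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
          except FileNotFoundError:
              return ""

      # e.g. "SecureBoot enabled" / "SecureBoot disabled"
      print(run(["mokutil", "--sb-state"]) or "mokutil not installed")

      # a signed module reports a signer; an unsigned out-of-tree build is
      # exactly what Secure Boot will refuse to load
      print("nvidia signer:", run(["modinfo", "-F", "signer", "nvidia"]) or "(none reported)")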

    The primary issue you will likely come across is that the nvcc compiler is not open source, and it is part of the CUDA toolchain. You can’t build things like llama.cpp with CUDA support without nvcc. Most example-type projects have the same issue. Without nvcc fully open, you are still somewhat limited. Also, the nvcc toolchain screws up the open-source-built stuff and will put you back at the train wreck of Secure Boot. If Nvidia had half a working brain, they would open source everything instead of the petty, conservative stupidity that drives proprietary fools. There is absolutely no room in AI for anyone that lacks full transparency.