• 0 Posts
  • 41 Comments
Joined 10 months ago
Cake day: December 27th, 2023


  • Well, for E2EE you obviously have to let one end encrypt the data for the other end (good luck with newsletters, then). For regular services, kindly asking them to support either S/MIME or GPG for outgoing email would at least make the wish known, but good luck there too.

    I think the already mentioned solution of encrypting incoming messages on your side, just before the MDA delivers them to your inbox, is the closest to what the OP wants. One would need to check whether a message is already encrypted and skip encryption for those.

    If you only want the admin of that email (IMAP) server to be unable to read all emails, placing a separate encrypting server (SMTP + encrypt + forward) between the outside world and your IMAP server could be a solution.

    One should have a look at the log files too, as some mailers log message subjects and of course senders/recipients along with the IP addresses of incoming/outgoing servers, which the OP might not want to be readable either (I don't know Protonmail that well).

    Also, GPG allows sign-then-encrypt, hiding the signature inside the encrypted data, which may be wanted. And one should check exactly which parts of a message's contents and headers end up encrypted or in plaintext on the server before feeling safe from the threat one wants to be protected from.
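
    A minimal sketch of that encrypt-before-delivery idea, assuming procmail sits in front of the inbox and "you@example.org" stands in for your own key (a real setup would rather use a proper PGP/MIME wrapper script than encrypt the raw body):

    ```
    # ~/.procmailrc -- encrypt the body of incoming mail with your own public
    # key before delivery, but skip messages that already look PGP-encrypted
    :0
    * !B ?? BEGIN PGP MESSAGE
    {
      # f = filter, b = pipe only the body, w = wait for the filter to succeed
      :0 fbw
      | gpg --batch --armor --encrypt --trust-model always --recipient you@example.org
    }
    ```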



  • You're welcome.

    What I'd suggest… a general rule I like to follow is to use a test system for everything new. But that doesn't need to be a fully separate system every time.

    Let's say you have your mailbox and want to try fetching new mail from it with fetchmail. You can use the UIDL mechanism to fetch every mail only once and otherwise leave them all on the server, but I like it a bit more secure: create a second email address/account at your mail provider, only for testing. That way you can test the mechanics however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; I once had an account with a cheap provider that deadlocked the inbox when full…). Then, when everything works the way you want, switch the account and password (or create another config file for fetchmail) and you're done. Every change (not only fetchmail things) can be tested this way before going live. Filtering could be done with procmail, for example, but if the MDA called by procmail exits with success while the email really wasn't delivered, the email might be lost forever, depending on the settings of course. So fiddling with new stuff always carries the risk of not fiddling correctly ;-)
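
    For the test-account idea, a minimal .fetchmailrc sketch (host, account and MDA are placeholders) that leaves everything on the server and fetches each message only once:

    ```
    # ~/.fetchmailrc -- poll only the separate test account
    poll mail.example.org protocol pop3
        user "testaccount" password "changeme"
        keep    # never delete anything on the server
        uidl    # remember which messages were already fetched
        mda "/usr/bin/procmail -d %T"
    ```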

    Have fun!


  • It's possible to tell your MTA (like Postfix) to relay all mail, or only some domains etc., through another MTA. So using a third party as the internet-facing service, then fetching the mail with fetchmail and storing it in a Dovecot server, is easy. On the sending side you can use your standard email client (i.e. Thunderbird on the PC or K-9 Mail on the smartphone) and submit to your Postfix instance, which also sits on the server hosting your Dovecot service. The MTA there accepts the mail and delivers it by rules, which could simply mean relaying everything through your freemailer's MTA with your account's username/password. I am doing this, but the "external" mail system is my own servers as well; I just don't want emails to stay too long on VMs in a datacenter where I have no access to the physical disks in case something goes wrong.
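
    The relaying part is just a few lines in Postfix; a rough sketch, assuming the freemailer's submission service is smtp.example.org:587 (names and credentials are placeholders):

    ```
    # /etc/postfix/main.cf
    relayhost = [smtp.example.org]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # /etc/postfix/sasl_passwd -- then run "postmap /etc/postfix/sasl_passwd"
    # [smtp.example.org]:587    username:password
    ```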

    A Raspberry Pi is sufficient for such a setup (I am currently using a Pi 4, but for email alone I'd say a 3 or older would do too); adding a disk via USB makes storage huge and cheap, and I use two USB SSDs in a RAID 1. That server could be reachable only through VPN if you wish, depending on your skills and needs (I mainly use SSL client certificates, which are supported by K-9 Mail and Thunderbird, so it fits seamlessly to connect through an HAProxy that authenticates those certificates before proxying the plain connection to the Pi). Clients like Thunderbird can store all emails offline (configurable per IMAP folder), making searches easy and quick, while my K-9 client can search locally or on the server as needed.
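
    The client-certificate part in HAProxy terms, as a sketch (addresses, ports and file names are placeholders): terminate TLS, require a certificate signed by my own CA, then hand the plain connection to the Pi's IMAP port.

    ```
    # /etc/haproxy/haproxy.cfg
    frontend imaps-in
        mode tcp
        bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify required
        default_backend pi-imap

    backend pi-imap
        mode tcp
        server pi 192.168.1.10:143
    ```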

    Maybe adjust the maximum mail size of your own MTA to exactly match (or sit slightly below) that of the freemailer you use, to avoid surprises with big emails that are accepted locally but can never be sent.
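
    That is a single Postfix setting; 25 MB here is just an example value to be matched to whatever the freemailer accepts:

    ```
    # /etc/postfix/main.cf -- stay at or slightly below the relay's limit
    message_size_limit = 26214400
    ```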

    It's possible to run a Nextcloud instance on that same Pi acting as a webmailer, just in case (I don't really need it, but I've set it up anyway). Nextcloud is also great for syncing/backing up files, pictures, contacts, notes, to-do lists and the calendar of your phone (I use DAVx5, OpenTasks and FolderSync for that). There are other webmailers available, but installing/using Nextcloud is not a bad idea either ;-)

    I also suggest setting up automatic offsite backups with snapshots of that Pi, to cover the emails as well as the setup and its configs ;-)


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 1 month ago

    One example of a program that used to do several things is sfdisk: it used to make the kernel reload the new partition table, although that was not its main job, which is only changing the table. That extra functionality moved to blockdev, which is much closer to such tasks, as it also triggers flushing buffers and, I think, setting read/write status. I am fully OK with that change, because it moves code out of a program that doesn't need it into one that already does similar things, so other partitioning programs like gdisk, fdisk or parted could go the same way, and the maintainers of the reread-partition-table logic can concentrate on one solution in one place (in userspace), instead of opening issues at an unknown number of projects that also alter partition tables. The "do one thing" paradigm is good for the developers who maintain the code, and I very much appreciate their work. If all you want are one-day flies that either die or eat huge amounts of resources just to be kept alive (picture a mayfly in an emergency room, hooked to a heart-lung machine, while surgeons rush around trying to lengthen its life by a few more seconds), then monolithic tools are fine for you: tools that can hardly be maintained and are a constant pain because nobody wants to fix their bugs, or cannot do so without creating new ones thanks to the tight internal dependency hell.
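
    To illustrate the split, changing the table and asking the kernel to re-read it are now two separate calls (/dev/sdX and the dump file are placeholders):

    ```
    sfdisk /dev/sdX < new-layout.dump    # only writes the new partition table
    blockdev --rereadpt /dev/sdX         # tells the kernel to reload it
    ```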

    The point is not a lack of examples of doing it wrong, but where one wants to be heading.


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 1 month ago

    Lol what???

    Wouldn't that be the definition of stable?

    The computer on Voyager 2 has been running for 47 years now. They might have rebooted some parts in the meantime, but overall that is a long time, and if a program is free of bugs, how long it can run depends only on the durability of the hardware and on protection from cosmic rays (which, afaik, were the main problems the Voyager probes faced, not bugs). That can be quite long if the hardware is protected from hazardous environments, maybe by using optoelectronics. The point is that bug-free software can run forever, limited only by hardware durability and energy supply; beyond that, no humans are needed for a veery long time ;-)


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 1 month ago

    However, systemd makes the system much more secure and reliable as it is

    Less secure and less reliable day by day, you mean? Systemd has introduced needless dependencies ever since its very beginning, as if that were its sole intention, and those dependencies have already been used for large-scale attacks: exactly the kind of attacks that people working hard to remove unneeded dependencies for security reasons meant to prevent with principles like "do one thing only" (though security was not the number one reason for that principle, I think). Systemd instead went "let's add another level to that exponential dependency tree from insecurity hell", and it felt like they did this stupid thing intentionally every month for a decade or more.

    And stability… if you don't monitor what systemd does, you'll never know how bad it actually is. I've written custom scripts to monitor systemd's failures (failures at very primitive parts of its job), and they show hundreds per day (usually around 200 to 300, sometimes more) across all our systems, and that is for one particular(!) measurement only, one that was breaking service stability, so I wrote a measure-fix-and-monitor workaround for it. Other defects were not monitored, only silently fixed by workarounds; so there are uncounted systemd bugs/instabilities lurking in the dark that have eaten a lot of working capacity…
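
    My actual scripts are more involved, but a minimal sketch of that kind of check (just logging whatever systemd itself currently reports as failed) could look like this:

    ```
    # log every unit systemd reports as failed right now
    systemctl list-units --state=failed --plain --no-legend \
      | awk '{print $1}' \
      | while read -r unit; do
          logger -t systemd-watch "failed unit: $unit"
        done
    ```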

    If you run distros with systemd, unreliability is your daily experience, unless you don't really care or have never experienced stability before. Stability is running a service (a single process) for 8 years without any interruption, so that when it suddenly stops you wonder: "was it maybe an attack? the process died, how could that be? were there any connections from outside at that moment?". I'm not talking about leaving something unpatched that long, but "stability" itself CAN mean: if you don't stop it, it will still be running in 10,000+ years, maybe millions; more likely humans drive themselves extinct long before such a process "just dies" from a bug. Systemd, on the other hand, randomly stops things that were running fine, for no reason, roughly once a month (it varies, also in what exactly it stops; twice it even stopped ssh on my servers, leaving me wondering whether to write yet another workaround for systemd's bugginess so it doesn't lock me out of the network again, or rather go for the real solution to most* of all systemd problems (*see below)). That is on the few standard installs I personally have, because I had no way to automatically replace the provider-installed distro on VMs in the datacenter. I want that replacement automated for the same reason I dislike systemd: it causes manual work for something that should be automated. Thanks to systemd's perpetual instability I now have that in place, and every second spent getting rid of systemd is worth it 100k times.

    This still does not solve all systemd-introduced problems, as the xz attack showed (a systemd dependency on xz made the infected xz library useful to the attacker at compile time of the sshd binary, through which the attacker could then infect the newly built sshd). One can still be attacked through systemd's dependency hell even without using systemd oneself, because the build machines used for your distro may be affected/infected by systemd's needless dependencies when "also" compiling for systemd-affected distributions; so there is a risk of becoming a victim of needless systemd dependencies while not using systemd at all. And the fact that the public fix for that attack was not the removal of the needless dependency (which was only included in the source for superfluous third-party "needs") made clear that systemd is an overall security problem that will not be solved quickly, but will stay, just like all Windows insecurities will stay for as long as they wish to push them onto their "users".

    Systemd reducing overall security, plus its unreliability, combined with some built-in impediments (e.g. when debugging its defects), is what drove me away from it. There are solutions that are far more stable, far more secure (and far better documented, by the way) and that do not pull in needless dependencies, which reduces risks and attack vectors and increases overall debuggability, deterministic behaviour being an easy example. And none of the promises that matter to me have been fulfilled by systemd yet. Drop-in replacement? I've heard that lie thousands of times, but in the last decade I have not experienced it a single time in any distro, and it no longer even seems to be planned or finished.

    For Windows users or Windows admins, a Linux with systemd on it IS an improvement in stability, security and of course updating, yes. But none of that comes from systemd; rather the opposite is the case, systemd chips away at it month by month. That's my experience, and that's the most important experience to me. I don't care what whitepapers claim or which broken promises anyone, or the masses, believe; I want secure and stable servers and services, and systemd does not fit either of those goals. The time when it was still "young" and early problems could be accepted in the hope they would soon be fixed is gone, and those fixes never appeared.


  • smb@lemmy.ml to Memes@lemmy.ml · Dear iPhone users: · 2 months ago

    The comparison was not about the OS but about the hardware it runs on (just as @Freefall said). But you also seem to be wrong with your other assumption:

    And both those devices are tied to a specific OS.

    That seems not to be the case, as install instructions for another OS for the mentioned device can be found here (I didn't try it, though):

    https://wiki.lineageos.org/devices/pdx215/

    LineageOS is still an "Android", but from another vendor with a clearly different approach than the original firmware, and what stops you from writing BSD drivers and compiling a BSD kernel for it instead? So I count the Xperia 1 III as NOT bound to any OS or OS vendor.

    But despite the much longer possible support/security, the freedom of choice and the endless other possibilities that often come with a free choice of OS, these great advantages weren't even mentioned there; so it wasn't an OS comparison, and it also wasn't a comparison of being bound to an OS versus the absence of a vendor-lock-in limitation jungle.


  • I plan to get a similar setup (music on a home server, synced to the phone for offline use), but I don't need to sync playlists as I rarely use them. I have a streaming account with one(!) playlist containing all the songs I remembered and wanted to listen to but didn't buy on CD back then, and I use the radio-like streaming options a lot.

    For syncing the phone with Nextcloud I use FolderSync (Pro), and it works as it should. It has lots of possible sync targets and lots of options to sync one way or both ways. I have folders with >8000 files that take some time to sync, but it works fine in the background with no problem. I let it sync over the mobile network too, because I value a reliably in-sync state more than bandwidth. However, I haven't really tried "immediate sync" for new/changed files yet, as I don't see the need for it, but it's one of many options.

    However, I only use Nextcloud sync for one-way or two-way syncs, and once used SFTP for a one-way sync, so I cannot judge all the other options; but if your playlists are organized as files, their two-way sync might be as easy as with the songs. I bought the Pro version on their website, so my license is not bound to a Google account.


  • Maybe there was a mix-up of individual data points and individual persons.

    Let's see if that could fit.

    As far as I read things in this thread, the whole leak consists of exactly these data points: full name, date of birth and SSN (three data points), plus username and password for 3 sites (six data points); that makes 3 + 6 = 9 data points per person.

    2.9 billion (US) should be 2,900,000,000 (correct me if I'm wrong, but where I live a "billion" is actually 1,000,000,000,000, so a "bit" more).

    Divided by 9, those 2.9 billion data points correspond to ~322 million persons.

    Wikipedia says the US had 331 million people in 2020…

    That would fit like an ass on a bucket! lol, just to mention that.

    Have a nice day!


  • You should definitely know what type of authentication you use (my opinion)!! The agent can hold the key indefinitely, so if you are simply not asked again when connecting once more, that is exactly what the agent is for. However, it only lives in RAM, so stopping the process or rebooting ends that, of course. If you haven't rebooted in the meantime, maybe try unloading all keys from it (ssh-add -D to delete them, ssh-add -L to check that it is empty) and see what the next login looks like.

    Btw: I use ControlMaster/ControlPath (with timeouts) to reduce the number of logins even further and to speed things up when running scripts or things like Ansible, monitoring via ssh, etc. Then everything goes through the already open channel and no authentication is needed for the second connection any more; it gets really fast.
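
    A minimal ~/.ssh/config sketch for that multiplexing setup (the timeout is just an example value):

    ```
    # ~/.ssh/config -- reuse one open connection per host for a while
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m
    ```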


  • The whole point of ssh-agent is to remember your passphrase.

    Replace "passphrase" with "private key" and you're quite correct.

    Passphrases (i.e. passwords) used to log in to servers via PasswordAuthentication are not stored in the agent. I might be wrong about the technical details of how the private key is actually held in RAM by the agent, but in the context of ssh, where a passphrase could be read as something used directly to log in to servers, saying the agent stores passphrases is at least a bit misleading.

    What you want is:

    • Use key authentication, not passwords.
    • Disable PasswordAuthentication on the server once you have set up and secured (with some sort of backup) ssh access with keys instead of passwords (a minimal sshd_config sketch follows below).
    • If you always want to type a short password at login, then don't use an agent, i.e. unset the SSH_AUTH_SOCK environment variable and check ssh_config.
    • Give your private key a passphrase that fits your needs (weigh the average time it should take attackers to guess it against the total time you would need to replace the pubkey on all your servers).
    • Change the private key immediately whenever someone might have had access to the passphrase-protected key file.
    • Do not give others access to your account on your PC, so you don't have to change your private key too often.

    Also an idea:

    • Use a hardware token that stores the private key AND is PIN-protected, i.e. it locks itself after a few attempts with a wrong PIN. This way the "password" you have to type for logins can be minimal, while the private key is protected from being copied. But even then you should not let others have access to the same machine (certainly not as root) or account (as your user, but better not at all), as an unlocked token could also be used to place a second, attacker-provided key on the server you wanted to protect.

    It all depends on the level of security you want to achieve. Additional TOTP could improve security too (but beware that some authenticator apps have "sharing" features which could compromise the TOTP secret even before its first use).
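
    The "keys instead of passwords" point from the list above boils down to a few sshd settings; a minimal sketch (only apply it after confirming that key-based login works, then reload sshd):

    ```
    # /etc/ssh/sshd_config
    PubkeyAuthentication yes
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    ```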


  • My theory is that you already have something providing ssh agent service

    In the past, some X server environments started an ssh-agent for you just in case, and for some reason I don't remember that was annoying, so I disabled it and started my own agent in my shell environment the way I wanted.

    Another possibility is that other agents are involved, like gpg-agent, which afaik can also handle ssh keys.

    But I would also look into $HOME/.ssh/config to see whether something is configured there that matches the hostname, the IP, or parts of them with wildcards*, because that could interfere with key selection; the .ssh/id_rsa key should IMHO always be tried if key auth is possible and no (matching) key is known to the ssh process, unless something is already configured…

    I'm not sure whether the system-wide /etc/ssh/ssh_config could interfere as well; maybe have a look there too, as this behaviour seems a bit unexpected unless something is specifically configured to cause it.
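
    To see what actually applies, "ssh -G <host>" prints the effective client configuration and "ssh -v <host>" shows which keys are offered; and if some agent keeps winning, a per-host stanza can pin the key explicitly (hostname is a placeholder):

    ```
    # ~/.ssh/config -- force exactly this key for this host, ignore agent extras
    Host example.com
        IdentityFile ~/.ssh/id_rsa
        IdentitiesOnly yes
    ```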



  • I once had to look at a firewall appliance cluster (and discovered it could not do any failover in its current state, but somehow the decision-maker was OK with that). Looking at its logs, I discovered rsh and rcp access from an IP address that belonged to a military organisation on a different continent. I had to raise it as a security incident. Later the vendor said this was only cluster-internal routing (over the dedicated crosslink) used for synchronisation (the very thing that did not work), handled by a separate routing table used only for cluster sync, and that it could never be used for real traffic. But why not simply use an IP that you actually "own" and give it a PTR record hinting at what the IP is used for, instead of customers scratching their heads over why the military still uses rcp and rsh? I guess because no company reads firewall logs anyway XD

    Someone else's IP? Yes! Because they'll never find out!!1!

    I really appreciate that IPv6 has things like a dedicated documentation address range, and that fc00::/7 is nicely short.


  • IPv6 in companies… IPv6 is not hard, but for internal networking no company (really) "needs" more than RFC 1918 address space. Thus any decision in that direction is always deemed "less" necessary, while any bonus for (da)magement personnel is of course crucial for the whole company's survival…

    For a company's services to be reachable from outside over IPv6, mostly "only" the load balancers/reverse proxies etc. need to be IPv6-ready. But this, for example, also produces logs that may break decades-old regexes nobody understands any more (since the good engineers left because of all the bonuses paid to damagement personnel), while other access/deny rules could break or, worse, let traffic through where they should block it (remember that "192.168." could match part of an IPv6 address IF some genius used a matching mechanism that treats the dot "." as a wildcard, because overpaid damagement personnel made them rush too much), and such rules can be hidden "somewhere". Altogether, technical debt is a huge blocker for everything, especially company growth, and if no customer "demands" IPv6, it stays on the damagement personnel's list as "fulfilling the wishes of engineers to keep them happy" instead of on the always-deleted "cleaning up technical debt caused by damagement personnel" list.

    Setting up firewalls for IPv6 is quite easy, and if you take the fine-grained "whitelist or drop/block" approach from the beginning, it might take a while to learn the IPv6 specialties, but the much bigger issue is IMHO the current state of the existing firewall rules. Who knows every existing rule? Which rules should already have been removed and must not be ported to IPv6? Usually firewalls and their rules are a big mess due to… again, too many bonuses paid to damagement personnel, keeping the company from taking the necessary steps forward…

    IPv6 adoption is slow for reasons that drive huge cars, which in turn speed up other problems ;-|


  • Maybe start with an adjustable setup:

    • Rent a cheap VM; I currently have one from OVH for 1 €/month (for the first year, cancellable monthly).
    • Set up 3 OpenVPN instances that redirect all routes through the tunnel: one IPv4-only, one IPv6-only and one with both (a rough config sketch follows below).
    • Set up the client on your mobile phone and your laptop, each with all three VPNs to choose from.
    • Now you can choose and try out IPv6, standalone or dual-stack, depending on which VPN you switch on.
    • Use this setup to call out services that don't support IPv6 yet, or that are broken with dual-stack 🤣
    • Rise from under-the-stone (IPv6 disabled) to in-the-sunlight (a well-above-industry-standard-level!!! "quick" adopter "genius" of new network technologies) 🤣
    • Improve the OpenVPN setup above to also be reachable "over" IPv6, if you haven't done that from the beginning; done: you've reached the pro level of the late-adopter noob group.

    (if you want, ask for config snippets)
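
    For example, a rough server-side sketch of the dual-stack instance from the list above (port, tunnel subnets and the usual ca/cert/key/dh lines are placeholders or omitted):

    ```
    # server-dualstack.conf -- push both an IPv4 and an IPv6 default route
    port 1196
    proto udp
    dev tun
    topology subnet
    server 10.8.2.0 255.255.255.0
    server-ipv6 fd00:8:2::/64
    push "redirect-gateway def1"
    push "route-ipv6 2000::/3"
    # plus the usual ca/cert/key/dh, keepalive and cipher settings
    ```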

    Btw, I prefer to wait for IPv8 😁 before "demanding" IPv6 from the services I use 🤣


  • smb@lemmy.ml to Privacy@lemmy.ml · Does MATRIX recipients know my IP? · 4 months ago

    A public room is public. Anyone could, and should be able to, enter it at any moment, start recording, and upload everything to $terrorist@/or$three-letter-agency or the like. The idea that someone else could also get the same already-public data later is not threatening, as that data is already considered public, as in "everyone in the world could have had it a second after it came into existence". And since removal from the public is not considered possible, uploading that intentionally published data again does not pose a greater threat than its first publication; it just uses a bit of bandwidth. If you are very sensitive about the visibility of whom you talk with, maybe don't enter "public" rooms in the first place.

    If you join a private room, you already want to share with the other participants that you are f***ing talking to them, including when, exactly whom you encrypted the data for, and to which servers it has to be forwarded. I expect the servers of all participants to forward messages to the recipients, and for that the servers need to know this kind of information. Of course, being aware of which data is used to make e.g. routing decisions is a good thing, but a "nightmare" would be Teams, Zoom, ICQ, WhatsApp and the like. I am sure messengers exist that are less traceable for participants, but full anonymity about who you are communicating with, so that even the servers know nothing about what happens in a room, is imho not even a future goal of Matrix.

    Not a "nightmare", but what a nightmare it must be to find out that a system that looked so promising does not fulfill "every" dream expectation one had, and even has options that are the opposite of one's dream expectation, like "public rooms" that are meant to be public! How horrible!!! (lol)

    By the way, as it seems worth noting here: if you exchange emails with someone's @gmail address, then Google has all of your mail history's metadata, just like your own provider's server does. Just to mention it: do not send emails to @gmail.com if you dislike Google knowing about them. And if you share a document with an edit history, the edit history is likely shared too ;-) Since "rooms" in Matrix are meant to have a state that changes from the beginning, possibly with every message, and one can reply to a message, which would later reveal the existence of that message (including at least a hint of what it was about), such information is imho meant to be rather complete than hidden. Maybe 1:1 chat solves this issue for you, as every chat with a new person starts empty.

    I might be wrong, but Matrix is already one of the most robust systems when it comes to "compromised servers", so very far from a nightmare. That is, unless you are either a true criminal bastard or a true world-saving hero; then every leaked byte might be the deadly one, that is true.

    So in case you are a true world-saving hero: maybe use a self-built Raspberry Pi mesh proxy chain, mounted on rooftops and delivered by drones at night, to relay the signal of an in-memory-only-tasks Raspi to a free wifi, where the Raspi that carries the orders runs on battery (like the rooftop proxy chain) but is hidden on public transport so it reaches the proxy mesh according to the transport timetable. Just to give a paranoid person some ideas and some work to do ;-) Once you've built everything, upload the code to GitHub and the designs to Thingiverse, so that "anyone" could have placed the proxy mesh to a free wifi on the rooftops and you are less likely to be suspected ;-) lol. Btw, a mesh system to accomplish this already exists; I think they named it the B.A.T.M.A.N. protocol (no joke), so the main struggle would be handling solar power vs. wifi signal strength, distances, humidity, and a windproof mount design that can be deployed by manually controlled quadrocopters. Good luck!


  • Hm, you have a point that it might not have been removed completely, but my personal problem with that point is that it reached me too late to just believe it was really never removed. For several reasons I would not blindly trust "evidence" that is under the control of the very party in question, which could manipulate it later to support such claims and which has already shown itself untrustworthy in what it says…

    That said, there are ways to check whether something was there at a given time or not. The one source I know of that could help here only seems to have records from 29 Jun 2023 18:44:33 onwards, which is too late for this:

    https://web.archive.org/web/20240000000000*/https://abc.xyz/investor/google-code-of-conduct/

    You are right, it does not make a difference in whether they can be trusted, but it makes a difference in why not, and in what to expect if you trust them anyway despite the red flags or, as a government, just let things go on. A person who was speeding by accident should perhaps be treated differently than a person who does so intentionally(!) while risking other people's lives. And what would be better proof of intention than a written statement or a removed canary? So such a statement does make a difference between "they just cannot handle their stuff", "they don't care at all", and "maybe they even have evil intentions".

    Examples:

    Some kids making a fire in the forest because they don't know the risks

    vs.

    Some young adults making a fire in the woods because they just don't care, despite knowing the risks

    vs.

    A company making a fire in the woods because it's cheaper to do stuff there, they lack the resources to do it safely, and someone else will pay the firefighters anyway.

    vs.

    A company stating that it wants to do so because it likes it, even though it could afford to do it safely, but no one could or would sue them anyway.

    While I don't want to say Google is like no. 4 here, to me these examples all make huge differences, no matter whether the woods actually caught fire or not.