• 13 Posts
  • 400 Comments
Joined 1 year ago
Cake day: December 28th, 2023



  • Hello there ! It’s still Saturday here :p !

    I recently set up weechat (IRC) and learned about bouncers. From what I understand it’s similar to a proxy, but it also keeps a backlog of the IRC conversation. I’m still new to it and have a lot of new things to learn.

    I’m thinking of self-hosting my personal bouncer on some cheap VPS.
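
    What I have in mind is roughly ZNC on the VPS; a minimal sketch of the idea (distro commands, ports and account names are just placeholders, not a tested recipe):

    # On the VPS (Debian/Ubuntu style): install ZNC and generate a config interactively
    sudo apt install znc
    znc --makeconf   # asks for a listening port, an admin user and the IRC networks to attach

    # weechat (or any IRC client) then connects to the VPS instead of the IRC network;
    # ZNC picks the right account/network from the server password "<user>/<network>:<password>"
    # and replays the backlog it kept while the client was offline.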

    Other than that, I was busy with av1an, encoding my Blu-ray library to the AV1 codec :).
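
    An av1an run is basically a single command per file, something like this (file names and encoder settings are only an example; double-check the flags against your av1an version):

    # Chunked AV1 encode with SVT-AV1: av1an splits the source at scene changes
    # and runs several encoder instances in parallel
    av1an -i input.mkv \
          -e svt-av1 \
          -v "--crf 30 --preset 6" \
          --workers 4 \
          -o output_av1.mkv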

    I also recently self-hosted metube (a yt-dlp web frontend) to download some music from RiMusic. Still need to work on a shortcut with HTTP Shortcuts on Android !
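
    In case it helps anyone, metube runs as a single container; this is from memory, so verify the image name and port against the project’s README:

    # MeTube web UI (yt-dlp frontend); finished downloads land in the mounted folder
    docker run -d --name metube \
        -p 8081:8081 \
        -v /srv/music:/downloads \
        ghcr.io/alexta69/metube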




  • I’ve had my eye on openSUSE TW for a long time now, but I’m really not fond of learning yet another package management tool :/ (YaST).

    Also, there are some packages I need that are easy to find in the Arch repos (things like av1an, VapourSynth scripts…) and I dunno if YaST is reliable enough when it comes to building from source.
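
    To be fair, the day-to-day commands probably map pretty directly from pacman; a rough equivalence as far as I understand it (zypper is the CLI package manager, YaST’s software module uses the same backend):

    # Rough pacman -> zypper mapping on Tumbleweed
    zypper refresh                    # pacman -Sy   (refresh repositories)
    zypper dup                        # pacman -Syu  (TW updates are full distribution upgrades)
    zypper install <pkg>              # pacman -S <pkg>
    zypper search <pattern>           # pacman -Ss <pattern>
    zypper remove --clean-deps <pkg>  # pacman -Rs <pkg>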

    I like my EndeavourOS system and just switched to the LTS kernel. Maybe if my system fails this time I will give it a try… But I’m not much of a distro hopper.

    One question though… Does openSUSE TW use Calamares as its installer? Because that would be a no-go… They dropped LVM support rather than fixing the issue, which is probably the stupidest thing to do :/ There is a hacky trick to make it work (mount every directory and cd into each one with different shells), but that is not something that should be done when installing something as critical as an OS… It feels wrong !

    I know EOS does use Calamares, but it’s configurable enough to work around this issue…


  • N0x0n@lemmy.ml to Privacy@lemmy.ml: Um.... Wtf?

    I have a strange bug where RethinkDNS’s WireGuard session keeps failing if my phone is not used for a while.

    I have to reconnect my WireGuard session or it just doesn’t work. I need to use ADB and check the logs to see what’s happening and write some kind of bug report on RethinkDNS’s bug tracker.
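
    For the log part I will probably do something like this over ADB (the package id is from memory, so it needs double-checking):

    # Follow only RethinkDNS's log output while reproducing the dropped tunnel
    # (assuming the app id is com.celzero.bravedns)
    adb logcat --pid="$(adb shell pidof com.celzero.bravedns)"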

    It’s not the first time their firewall and WireGuard tunnel have misbehaved together. Other than that, RethinkDNS rocks !!



  • I was in the same boat… I just wanted a simple goddamn self-hosted cloud storage without the nitty-gritty or all the bloat that comes with most local/self-hosted cloud solutions…

    Syncthing is good, but not really a cloud storage solution (I love Syncthing and I use it to sync all my backups !!).

    Give SFTPGo a try :) It also has WebDAV functionality if you want to use it that way ! It’s just plain file storage with security features. However, I’m not sure there are any mobile applications available; I mostly used it as a web application :).
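
    If you want a quick look at it, something along these lines should bring it up (image name, ports and paths are the defaults as I remember them, so check SFTPGo’s docs before copying):

    # SFTPGo: web admin UI on 8080, SFTP on 2022, data kept in a named volume
    docker run -d --name sftpgo \
        -p 8080:8080 \
        -p 2022:2022 \
        -v sftpgo_data:/srv/sftpgo \
        drakkan/sftpgo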


  • Nice tool, thank you ! However I’m a bit confused:

    If you’re interested in blocking tracking, then best download TrackerControl from here, from F-Droid, or from the IzzyOnDroid F-Droid Repository. If you’re interested in analysing tracking and generating factual evidence of it (e.g. for research), then choose the version from Google Play. The analysis results from this version will usually be more accurate.

    That’s strange… Why have different versions doing different things?




  • I host my own locally and set it to restart the docker every hour like they mentioned in their guide.

    That’s part of why I stay away from Invidious. Why would you need to restart it every hour? How inconvenient :/.
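
    For context, the hourly restart usually boils down to a cron entry like this (container name made up here), which is exactly the part that puts me off:

    # crontab -e : restart the Invidious container at the top of every hour
    0 * * * * docker restart invidious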

    Piped has other issues right now but fixes are underway :)



  • Thank you ! It actually ticks every box for my use case (for my files), looks pretty rad !

    This might work, but I think it is best to not tinker further if you already have a working script (especially one that you understand and can modify further if needed).

    I totally agree, but I will keep your regex as a reference; in the near future I will try to decompose your regex as a learning exercise, but it looks rather complex !

    Another user came up with the following solution:

    sed -E ':l;s/(\[[^]]*\]\()([^)#]*#[^)]*\))/\1\n\2/;Te;H;g;s/\n//;s/\n.*//;x;s/.*\n//;/^https?:/!{:h;s/^([^#]*#[^)]*)(%20|\.)([^)]*\))/\1-\3/;th;s/(#[^)]*\))/\L\1/;};tl;:e;H;z;x;s/\n//;'
    

    Just as a little experiment, if you want to spend some time and give me an answer, what do you think? It’s another way to achieve the same kind of result, but the two are significantly different. I know there are a thousand ways to achieve the same result, but I’m kinda curious how it looks to an expert’s eyes :).

    Thanks again for your help and the time you took to write up a complex regex for my use case ! 👍


  • Hello :) Sorry to ping you; I just gave pandoc a try but it doesn’t work, and I had to dig a bit further into the web to find out why !

    Links to headings with spaces are not specified by CommonMark, and each tool implements a different approach… Most replace spaces with hyphens, others use URL encoding (%20). So even though pandoc looks awesome, it doesn’t work for my use case (or did I miss something? Feel free to comment).

    You can give it a try on https://pandoc.org/try/ with commonmark to gfm:

    [Just a test](#Just a test)
    [Just a link](https://mylink/%20with%20space.com)
    [External link](Readme.md#JUST%20a%20test)
    [Link with numbers](readme.md#1.3%20this%20is%20another%20test)
    [Link with numbers](Another%20file%20to%20readme.md#1.3%20this%20is%20another%20test)
    

    If you prefer a CLI version:

    pandoc --from=commonmark_x --to=gfm+gfm_auto_identifiers "/home/user/Documents/test.md" -o "pandoc_test.md"
    

  • Wow ! Thank you ! I did a quick test on a test-file.md:

    [Just a test](#just-a-test)
    [Just a link](https://mylink/%20with%20space.com)
    [External link](readme.md#just-a-test)
    [Link with numbers](readme.md#1-3-this-is-another-test)
    [Link with numbers](Another%20file%20to%20readme.md#1-3-this-is-another-test)
    

    Great job ! Thank you very much !!! I’m really impressed by what someone with proper knowledge can do ! However, I really do not want to mess around with your regex… That would only be a recipe for disaster xD ! I will carefully keep your regex and annotated file in my knowledge base; I’m sure some time in the future I will come back to it and try to break it down as a learning process.

    Thank you very much !!! 👍


  • I don’t really have a technical reason, but I only use named volumes to keep things clear and tidy, especially for compose files with databases.

    When I do a backup, I run a script that saves each volume/database/compose file, well organized in directories and archived with tar.
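
    The idea is roughly the classic throwaway-container trick to tar up a named volume, something like this (volume and directory names are placeholders):

    # Dump the named volume "app_data" into ./backups/app_data.tar.gz
    # using a temporary Alpine container that mounts the volume read-only
    mkdir -p backups
    docker run --rm \
        -v app_data:/data:ro \
        -v "$PWD/backups":/backup \
        alpine tar czf /backup/app_data.tar.gz -C /data .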

    I have this structure in my home directory: /home/user/docker/application_name/docker-compose.yaml, and each directory only contains the docker-compose.yml file (sometimes also an .env file or a Dockerfile).

    I dunno if this is the most efficient or even the best way to do things :/ but it also helps me keep the necessary config files separate from the actual data files (like movie files for Jellyfin), and it seems easier to switch over if I only need one part and not the other (uhh, sorry for my badly worded English, I hope it makes sense).

    Other than that, I also like to tinker around and learn things :) Adding complexity gives me some kind of challenge? XD


  • Thank you very much for taking the time and trying to help me, with comments and all !

    you need a full featured markdown parser for this.

    Do you mean something like pandoc? Someone pointed me to it and it seems it can convert to GitHub-Flavored Markdown ! Thanks for the pointer, I will give it a try to see how it works out with my actual script :)

    Sorry for the very late response !! Here is the working bash script another user helped me put together:

    #! /bin/bash
    
    files="/home/USER/projects/test.md"
    
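    # Grab every markdown link target that does not start with https (internal/relative links),
    # then keep only the part from the '#' fragment up to the closing parenthesis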
    mdlinks="$(grep -Po ']\((?!https).*\)' "$files")"
    mdlinks2="$(grep -Po '#.*' <<<$mdlinks)"
    
    while IFS= read -r line; do
    	#Converts 1.2 to 1-2 (For a third level heading needs to add a supplementary [0-9]) 
    	dashlink="$(echo "$line" | sed -r 's|(.+[0-9]+)\.([0-9]+.+\))|\1-\2|')"
    	sed -i "s/$line/${dashlink}/" "$files"
    
    	#Puts everything to lowercase after a hashtag
    	lowercaselink="$(echo "$dashlink" | sed -r 's|#.+\)|\L&|')"
    	sed -i "s/$dashlink/${lowercaselink}/" "$files"
    
    	#Removes spaces (%20) from markdown links after a hashtag
    	spacelink="$(echo "$lowercaselink" | sed 's|%20|-|g')"
    	sed -i "s/$lowercaselink/${spacelink}/" "$files"
    
    done <<<"$mdlinks2"
    

  • Hello :) Sorry for the very late response !

    Indeed, your regex gets very close as a one-liner, I’m pretty impressed ! :0 However I forgot to mention something in my post (I only thought about it after working on it with another user in the comments…). There are 2 things missing from your beautiful and complex regex:

    1. Numbering with dots also needs a dash in between (actually I think every special character, like a space or a dot, is converted to a dash)
    FROM
    ---------------
    [Link with numbers](readme.md#1.3%20this%20is%20another%20test)
    
    TO
    ---------------
    [Link with numbers](readme.md#1-3-this-is-another-test)
    
    2. The part before the hashtag needs to keep its original form (it links to a real file)
    FROM
    ---------------
    [Link with numbers](Another%20file%20to%20readme.md#1.3%20this%20is%20another%20test.md)
    
    TO
    ---------------
    [Link with numbers](Another%20file%20to%20readme.md#1-3-this-is-another-test.md)
    

    Sorry for the trouble, I wasn’t aware of all the GitHub-Flavored Markdown syntax :/. I put together a very cool, perfectly working script with another user, but if you want to modify your regex and try to solve the issue in pure regex, feel free :) I’m very curious what it could look like (god, regex is so obscure and at the same time it has some beauty in it !)

    #! /bin/bash
    
    files="/home/USER/projects/test.md"
    
    mdlinks="$(grep -Po ']\((?!https).*\)' "$files")"
    mdlinks2="$(grep -Po '#.*' <<<$mdlinks)"
    
    while IFS= read -r line; do
    	#Converts 1.2 to 1-2 (For a third level heading needs to add a supplementary [0-9]) 
    	dashlink="$(echo "$line" | sed -r 's|(.+[0-9]+)\.([0-9]+.+\))|\1-\2|')"
    	sed -i "s/$line/${dashlink}/" "$files"
    
    	#Puts everything to lowercase after a hashtag
    	lowercaselink="$(echo "$dashlink" | sed -r 's|#.+\)|\L&|')"
    	sed -i "s/$dashlink/${lowercaselink}/" "$files"
    
    	#Removes spaces (%20) from markdown links after a hashtag
    	spacelink="$(echo "$lowercaselink" | sed 's|%20|-|g')"
    	sed -i "s/$lowercaselink/${spacelink}/" "$files"
    
    done <<<"$mdlinks2"