Generate phone number to share with strangers

The whole world is struggling with COVID-19. Maybe you are thinking about helping others, e.g. by doing their groceries. Naturally, we think about our own families first. But we also want to help others, maybe even strangers. If you worry about posting your personal phone number for those who seek help, here is a free-of-charge solution (at least if you live in Germany).

Register with satellite

If you have an old Android phone (it may also work with iPhones or even a cheap Amazon Fire tablet, see FAQ below), even one without a SIM card, you can still use it to register another valid mobile number. Just install the satellite app provided by sipgate via Google's Play Store (direct link). While creating an account, you will get a new phone number and start the verification process. Unfortunately this process is done by sending letters. Yes, actual letters – it's not how the satellite team wants it, but it is required by the German Federal Network Agency (Bundesnetzagentur), because you will get a valid mobile phone number.

Register with Signal and/or whatsapp

After receiving your verification letter (Deutsche Post is still delivering :)) and finishing the process, you are reachable by others (who are not using the satellite app) via phone calls. Unfortunately receiving SMS is currently not possible with satellite, but who uses it anyway? There are services like whatsapp or Signal. So let's register our new number with Signal (the messenger I always prefer for security and privacy reasons). Again, install the app via Google Play Store (direct link). You have to verify your new phone number, this time with a 6-digit verification code. As you are not able to receive SMS, you have to wait about 1 min until the “call my number” option becomes active. You will then get a phone call from a US number (starting with +1) in your satellite app. It's just a machine reading the 6-digit verification code to you. There is a similar process for whatsapp.

Now you have a new phone number: you can receive phone calls, call somebody else for up to 100 minutes, and send and receive messages with Signal and/or whatsapp! So when posting your new number somewhere for strangers, just add the information that they have to call you or use Signal/whatsapp.

FAQ

  • Does this work in other countries besides Germany?
    As far as I know, it does not, because you need a valid German postal address to get the verification letter by satellite.
  • Why do I have to use Signal?
    You are not able to receive SMS with your satellite mobile number, so you need another service. But any service that allows verification via phone call is suitable. I myself still prefer Signal.
  • This should work on iPhones too!
    I think it does work, but I don’t own an iPhone, so I can’t verify.
  • What about the cheap amazon fire tablet?
    That’s not easy, but possible. The following description includes some technical terms which are not explained; you have to figure it out yourself from the links provided.
    First you have to install the Play services on your tablet and install everything from there. Or – if this does not work – get the APKs for satellite and Signal. I have done it with Aurora (a Google Play Store client with anonymous access and a direct download feature) installed from F-Droid (an alternative app store for Android with FLOSS apps). In Aurora I could download the satellite app only with an already-in-use German Google account. You get the manual download in the app details => 3-dot menu. Build number 100643 worked for me. There is no download progress bar, so just be patient. Signal worked only by simulating the Xiaomi Redmi Note 3 device, build number 6132.
    Afterwards you have to sideload them to your Fire tablet. You can do this with adb, or (my preferred method) upload the APK to your Dropbox, Nextcloud, whatever, download it on the Fire tablet and try to install it. When you are asked about installing from unknown sources, you are probably doing it right. You might have to search for the downloaded APK again: it’s in the Documents app -> local storage.
    Now you can go back to the original posting.
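For the adb route, the sideloading step can be wrapped in a tiny helper. This is just a sketch (the function name is mine; adb from the Android platform-tools must be installed and USB debugging enabled on the tablet):

```shell
#!/usr/bin/env bash
# Hypothetical helper: sideload an APK to the Fire tablet via adb.
install_apk() {
  local apk="$1"
  if [[ ! -s "$apk" ]]; then
    echo "error: '$apk' does not exist or is empty" >&2
    return 1
  fi
  if ! command -v adb >/dev/null; then
    echo "error: adb not found - install the android platform-tools first" >&2
    return 1
  fi
  adb devices            # the tablet must show up as 'device', not 'unauthorized'
  adb install -r "$apk"  # -r replaces an already installed version
}
```

Call it as `install_apk satellite.apk` (and again for the Signal APK) once the tablet shows up in `adb devices`.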

3/3 Finishing the mail migration

1/3: Mail migration from shared hosting to server with mailcow
2/3 Cut over plan for servers and mails

The whole cut over plan had 50 steps, including setting up mailcow, configuration of domains and mailboxes, backups and migration of all mails.

Most important was a migration documentation for my users (not really technical), so they could easily change to the new mail domain. Everything worked fine. And the final todo, the fail2ban whitelist? I’ve managed to do this via a script. I already had a script running on one of my Raspberry Pis which checks for changes of the WAN IP on my Fritz!Box. If a change is detected, the new IP is written to a file and transferred to my root servers.

On the server running mailcow, this file is monitored and if it changes, an update is sent to mailcow via curl and the API. Unfortunately it broke some days ago and I am still trying to fix it.

Post fail2ban IPs to mailcow

#!/usr/bin/env bash
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
cd "$SCRIPT_DIR" || exit 1
source mailcow-fail2ban-config.cfg

# static part of the fail2ban settings; the whitelist is appended below
api_call_string="{\"ban_time\":\"1800\",\"max_attempts\":\"10\",\"retry_window\":\"600\",\"netban_ipv4\":\"24\",\"netban_ipv6\":\"64\",\"whitelist\":\""

api_call_string="${api_call_string}$LOCALPUBLICIP"

# static whitelist entries from the config, separated by a literal \r\n
for ip in $WHITELISTIPS; do
  api_call_string="${api_call_string}\r\n$ip"
done

# current home IP, written by the script on the raspberry
newip=$(cat "$FILENEWPATH$FILENEWIP")
api_call_string="$api_call_string\r\n$newip\",\"blacklist\":\"\"}"

#echo "$api_call_string"
curl -X POST "https://$TARGET_HOSTNAME/api/v1/edit/fail2ban" -d attr="$api_call_string" -H "X-API-Key: $MAILCOW_APIKEY"

Check for changes of the “new ip file”

2>/dev/null 1>&2 inotify-hookable -f tmp/homeip.txt -c '/home/USERNAME/app/server-scripts/broadcast-ip/mailcow-fail2ban-config.sh' &
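The string assembly in the script above can be factored into a function and tested without touching the server. A sketch (the function name is mine; the attr layout follows the script):

```shell
#!/usr/bin/env bash
# Sketch: build the fail2ban "attr" JSON the same way the script does,
# joining whitelist entries with a literal \r\n, as the mailcow UI stores them.
build_fail2ban_attr() {
  local whitelist="$1"  # space-separated list of IPs/CIDRs
  local json="{\"ban_time\":\"1800\",\"max_attempts\":\"10\",\"retry_window\":\"600\",\"netban_ipv4\":\"24\",\"netban_ipv6\":\"64\",\"whitelist\":\""
  local ip first=1
  for ip in $whitelist; do
    if [[ $first -eq 1 ]]; then
      json="${json}${ip}"
      first=0
    else
      json="${json}\r\n${ip}"
    fi
  done
  printf '%s' "${json}\",\"blacklist\":\"\"}"
}
```

Running `build_fail2ban_attr "192.0.2.1 203.0.113.0/24"` prints the complete attr payload, so you can eyeball the JSON before posting it to the API.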

2/3 Cut over plan for servers and mails

Read part 1 about how setting up mailcow worked and how I locked myself out with monit.

MX records
Because I already knew how to use mailcow, setting up all the configs again was easy. Transferring the old mails (those already received) was described in part 1, but doing the correct cut over for the domains wasn’t a simple copy. After setting up the new domains and accounts on mailcow, I added a MX record on my old hosting provider. It pointed to a domain which terminated on my mailcow server, to be more specific: on the FQDN mail.example.org not just example.org. Giving it a high priority (low number, 5 in my case) led to mails already transferred to the new server!

Afterwards I registered my old domain with a new hosting provider (not the one running my mailcow server) and was eager to set up the correct DNS records there. Unfortunately I had to do the whole AUTH code thing to transfer the domain before altering the DNS settings. So after registering I had to terminate the domain contracts with my old provider, get the AUTH code and provide it to the new domain provider. When their setup was done (but DNS not yet refreshed), I had to add an MX record again: mail.example.org with prio 10 and the provider’s server as backup (relay). To check whether the DNS settings were already available to the public, I checked from my local machine, the AWS testing VM and my second server with nslookup:

nslookup
> set type=MX       # query for mail exchanger (MX) records
> example.com       # the mail domain

example.com    mail exchanger = 10 mail.example.org

Note: You have to query example.com (my mail domain) and get the answer mail.example.org (the mail server).

When doing some test mails I realized something funny: some mails already arrived at the new server, some did not. Why was that?

Randomly old and new server?
First I thought it was really random, but when one of my mailbox users called me I got the right clue. He told me that he could not send mails to me from the new server. That was caused by a misconfiguration: I had already set up another domain in mailcow, but had not transferred it yet. So the mailcow server knew this domain, thought mails to it could be delivered locally, and failed.
That was exactly the same behavior as on the old hoster: mails to addresses already transferred were still being delivered locally, which failed. As a solution I removed the second (old) MX record and just waited for the whole domain transfer to finish. After that, all mails were delivered to the new server without any issues.

Last part: Finishing and getting fail2ban to work.

1/3: Mail migration from shared hosting to server with mailcow

Intro:
I registered my first domain nearly 20 years ago. I tried to develop with MS FrontPage but switched to Macromedia Dreamweaver as fast as possible. However, I was never satisfied with the quality of the results, so I started to learn HTML, CSS and some JavaScript, later PHP and SQL. 10 years ago this became my profession: after finishing my studies I started as a web developer (with Java), and to this day I am in this business as project manager and consultant for large e-commerce systems.
After switching hosters in the early years, I have now stayed with one for almost 15 years: registration of all my domains, handling my mails and… nothing more, because I did not host a real website anymore. A few years ago, I started to re-think my digital privacy: maybe using my own domain for emails on a shared host wasn’t enough? And using Google Calendar and Dropbox seemed convenient but not private. So I decided to rent a real root server and do my own stuff – with encryption on the server, for sure.

Setting up my root server
I had used mailcow for at least 1 year on 2 root servers with different domains. Everything was fine; my greatest fear, losing mails, did not come true. Setting up mailcow and using it with letsencrypt certs was no problem at all. But unfortunately I had chosen the last version not shipped with docker containers. Further development was canceled and I had to switch to the dockerized version. As I wasn’t familiar with docker, I set up a VM and tinkered a bit with it. Not very happy with what I saw (maybe because of the old docker version, but also because I didn’t get what was happening inside the containers), I postponed the transfer to mailcow-dockerized.

Once I had decided to move from the old hoster to the root servers already in use, I tried AWS for a demo of mailcow-dockerized with my own setup. As I am very familiar with neither AWS nor mail services, I had to read a lot. I already knew the basics – what a relay server is, an MX record, an MTA, postfix and so on – but not how to use everything the best way. But setting everything up really satisfied me, so I started a migration from my old hoster to my root servers. Everything was very easy, aside from the mails. When I realized that the migration path from “traditional” mailcow to dockerized was old and had lots of manual steps, I opted to re-install one of my servers, which only had the old mailcow and monit running.

I installed Ubuntu Server 18.04 LTS (as before, but upgraded from 16.04 LTS), set up mailcow-dockerized (as already tried on AWS) and re-installed monit. Configuring the correct domain was no problem at all, and restoring the old mails was easy too: I had an Ubuntu desktop VM with a Thunderbird installation, copied all the mails to local folders before re-installing the server and copied them back after re-configuring the old mailboxes (only 3). After that, I re-installed my monit configs, added checks for imaps and smtps and went to bed.

Monit: fail, success, fail, success

Just a few minutes after going to bed, my monit checks failed for https (the mailcow frontend), and soon for imaps and smtps, too. I did not check the reason before getting ready for work in the morning and could not determine anything with a short glance. From my office I just left the syslog and htop running and realized that the (server-)internal monit instance did not fail, unlike the monit instance which checked the server’s services from my home. So I vpn’ed from my “investigation” VM to my home router and checked the home monit, which runs on a Raspberry Pi. Nothing odd… But there was a pattern: for about 30 min everything was fine, then for 30 min connections were refused, then everything was fine again, and 30 min later connections were refused again. But suddenly I had the configuration UI (https) for mailcow running in one browser window and failing in the other. The failing one was in the “investigation” VM, vpn’ed to my home; the succeeding one was from my workplace notebook’s host system. Finally I realized: the monit from the server itself and the second monit from my home were blocked by the integrated fail2ban! I just had not added the server’s own IP to the whitelist and had managed to use the wrong IP for my home.

So the setup was ready, but I had one more task: managing the whitelist via API or something similar, as my ISP changes my home IP with every dial-in (luckily no auto-disconnect every 24h).

Second part: setting up a whole “cut over” plan and transferring domains and mails.

reading internet IP from fritz.box

There are several ways to obtain the WAN IP from your Fritz!Box. Most ideas did not work for me, either because I got a message about unsupported functions or because of missing authentication. Finally I added a new user in the Fritz!Box (OS version 6.8X on a Fritz!Box 7590) and got the IP with the following script:

#!/usr/bin/env bash
source broadcast-ip.cfg

# TR-064 SOAP request asking the Fritz!Box for its external IP
soap_body='<?xml version="1.0" encoding="utf-8"?> <s:Envelope s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Body> <u:GetExternalIPAddress xmlns:u="urn:dslforum-org:service:WANPPPConnection:1"></u:GetExternalIPAddress> </s:Body> </s:Envelope>'

newip=$(curl -s --anyauth --user "$ROUTERUSER:$ROUTERPASS" http://fritz.box:49000/upnp/control/wanpppconn1 \
  -H 'Content-Type: text/xml; charset="utf-8"' \
  -H 'SoapAction:urn:dslforum-org:service:WANPPPConnection:1#GetExternalIPAddress' \
  -d "$soap_body" | sed -n -e 's#^.*<NewExternalIPAddress>\(.*\)</NewExternalIPAddress>.*$#\1#p')
echo "current IP: $newip"

oldip=$(cat "$FILEOLDIP")
echo "Having old IP: $oldip"
# new IP => store it for later comparison and copy it to the server
if [[ "$newip" != "$oldip" ]] ; then
  echo "$newip" > "$FILEOLDIP"
  echo "Stored new IP: $newip"
  scp -P "$SERVERPORT" "$FILEOLDIP" "$SERVERUSER@$SERVERADDRESS:$FILENEWIP"
fi

broadcast-ip.cfg

ROUTERUSER="username"
ROUTERPASS="password"
SERVERUSER="user"
SERVERADDRESS="example.net"
SERVERPORT="22"
FILEOLDIP=oldip.txt
FILENEWIP=homeip.txt

The scp command at the end of the shell script copies the new file via ssh to a server which needs it. My open todos are automating the request for the IP (triggered by the Fritz!Box or by a cronjob) and later the usage of the IP on the server.
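The sed expression that extracts the address from the SOAP answer can be tried in isolation. The sample response below is made up (shortened to the relevant element), but follows the TR-064 answer format:

```shell
#!/usr/bin/env bash
# Feed a (made-up) SOAP response through the same sed expression the script uses.
response='<s:Envelope><s:Body><u:GetExternalIPAddressResponse><NewExternalIPAddress>203.0.113.7</NewExternalIPAddress></u:GetExternalIPAddressResponse></s:Body></s:Envelope>'
newip=$(printf '%s' "$response" | sed -n -e 's#^.*<NewExternalIPAddress>\(.*\)</NewExternalIPAddress>.*$#\1#p')
echo "current IP: $newip"   # -> current IP: 203.0.113.7
```

The `-n` plus the trailing `p` flag make sed print only lines where the substitution matched, so non-matching (e.g. error) responses yield an empty string.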

OMV4 samba share with Sonos

The music stored on my NAS was still accessible via VLC on my computer and on my mobile devices. Unfortunately SONOS could not access my music lib anymore, although the old paths were still correct. First I tried to get some info from the samba logs, but changing the log level from 0 to 3 had no effect after restarting smbd. Changing it via the web interface did: log level AND syslog had to be set to 3, and afterwards the logs appeared in the syslog (and only there).

Jun 16 08:38:00 gragas smbd[9351]: ntlm_password_check: NTLMv1 passwords NOT PERMITTED for user MYUSER
Jun 16 08:38:00 gragas smbd[9351]: [2018/06/16 08:38:00.748581, 2] ../source3/auth/auth.c:315(auth_check_ntlm_password)
Jun 16 08:38:00 gragas smbd[9351]: check_ntlm_password: Authentication for user [MYUSER] -> [MYUSER] FAILED with error NT_STATUS_WRONG_PASSWORD

So SONOS still does not support modern samba authentication. ntlm auth = yes had to be added to /etc/samba/smb.conf to get it working again, but this is a security issue. Fix it, Sonos!
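For reference, a minimal sketch of the change in /etc/samba/smb.conf – only the ntlm auth line comes from the fix above, the rest of your [global] section stays as it is:

```ini
[global]
   # allow the legacy NTLMv1 authentication that Sonos still uses
   # (weakens security for the whole samba server - remove once Sonos supports better auth)
   ntlm auth = yes
```

Reload samba afterwards, e.g. with `systemctl restart smbd`.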

Upgrading to OMV4, part 2

After restoring all my data on 2 new disks (raid1), I still had some more trouble: the old raid setup wasn’t coming back after several tries. So I had to format both disks again (quick mode did not work) before creating a new raid1 and setting up the ext4 file system again. Afterwards I moved some of the data copied to the new disks back to the old ones; now all the network shares used for VLC, Sonos or backups are running again under the old share names.

In addition I had an issue with a PHP error: every 30 min monit spammed the following message via mail:

PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20131226/pam.so' - /usr/lib/php5/20131226/pam.so: cannot open shared object file: No such file or directory in Unknown on line 0

Getting rid of it seemed easy: just install php5-pam and everything works. Unfortunately php5 is no longer available in the stretch repos. But then I asked myself whether I was still using php5 at all – maybe for the nextcloud installation? Erm, no – I had just fixed errors there by switching to php7. So purging all php5 packages did it!

Upgrading to OMV4 with pain

A few weeks ago the openmediavault project announced that version 3 will go end-of-life at the end of June 2018. So I decided to upgrade my NAS. With the update from OMV3 to 4, the OS was also upgraded from jessie (end of support in 2018-06, too) to stretch. Although it is not recommended, I upgraded and did not do a new install. My main reason was the additional install of nextcloud on the NAS. As expected, some things went wrong:

  • OMV4 did not recognize the filesystem on my raid-1 setups. Disks: available, raid-setup: available, filesystem: missing
    Creating new file systems would mean formatting first, thus losing all data. As it was stated in the forums that this also happened for some users with a new install, I did not bother fixing the problem. As two of the “old” disks had had several block errors for a long time and were asking for replacement, I just bought 2 WD Red 2 TB, put them in the NAS and created a new ext4 FS on the raid1 config. Mounting the old disks on my notebook worked like this:

    mdadm -A /dev/sdd1                          # assemble the old raid member
    mkdir restore-point
    mount -t ext4 /dev/md/label01 restore-point # mount the assembled array
    

    Now I am copying all the data from the old disks to the new ones.

  • The second problem was the upgrade from php5 to php7 for nextcloud. But as I had saved the manual installation steps in my own redmine wiki, fixing that was easy:
    apt install php5-gd php5-mysql php5-curl php5-intl php5-mcrypt #php modules
    apt install build-essential libsmbclient libsmbclient-dev #dev tools and samba client
    apt install php5-dev php5-pecl-http #php dev and package manager for smbclient

    had to be changed to:

    apt install php-gd php-mysql php-curl php-intl php-mcrypt php-smbclient libapache2-mod-php

    The last step was restoring the data directory and changing the owner to www-data again => it works again.

openmediavault mail

I have been using openmediavault for about 3 years, but had never received any notifications. Today I checked the admin panel, did some configuration and found out that one of my 2 TB disks had already died; the remaining one in the RAID-1 had lots of erroneous sectors. I had been aware for a long time that I had to replace both of them – but why did omv not notify me at all?

Turns out that the configuration of the SMTP server did not work in omv, and according to this thread it never has! As I use port 465 with Thunderbird or K-9, I wasn’t aware of that issue, but switching to 587 worked for me. I am running mailcow for my mail services.

Renew letsencrypt cert on NAS

There is a nextcloud instance running on my local NAS (with openmediavault 3). NC is available via a DynDNS domain and a firewall rule in my Fritz!Box router. As file transfers should be encrypted, I am using letsencrypt certificates for most of my services. Obtaining a new cert was pretty easy, but after adding more configs to my apache, the renewal failed:

letsencrypt -d NEXTCLOUD.EXAMPLE.COM
 Detail: Incorrect validation certificate for tls-sni-01 challenge.
 Requested
 f690ef0…237be42.acme.invalid
 from MYISPIP:443. Received 2 certificate(s), first
 certificate had names “NEXTCLOUD.EXAMPLE.COM”

I am still not sure about the reason, but when using certbot everything worked out fine:

sudo certbot certonly -d NEXTCLOUD.EXAMPLE.COM