Sunday, April 23, 2023

How to decrypt TLS traffic of a Java application running on localhost behind a proxy server with Wireshark

See github repo: https://github.com/wmmnpr/how-to/blob/main/wireshark-decrypt-ssl/README.md



Wednesday, March 22, 2023

Getting Ionic 6 to run on iOS 12+


The issue was to get Ionic 6 running on an old device with iOS 12.5.7.

At the time of writing, Ionic 6.20.9 and Capacitor 4 were the latest versions. By rolling back to Capacitor 3, it was possible to deploy the app to the old iOS 12 phone.

The Node.js version used was v18.14.0.

Below, the app is shown listing the results of a scan for devices using the cordova-plugin-ble-central plugin.


As described in the Ionic docs, install the Ionic CLI with:

npm install -g @ionic/cli

ionic --version

6.20.9

Then create an Ionic project (Angular was chosen for this project):

ionic start

After the project has been created, remove version 4 of Capacitor and install version 3 (https://capacitorjs.com/docs/v3/getting-started) as follows:

npm uninstall @capacitor/ios @capacitor/cli @capacitor/core --save

npm install @capacitor/ios@latest-3 @capacitor/cli@latest-3 @capacitor/core@latest-3 --save

You should see 3.9.0 when running:

npx cap --version 

3.9.0

Then add the ios platform as described here: https://capacitorjs.com/docs/v3/ios

npx cap add ios

In the ios/App/Podfile file, you should see:

platform :ios, '12.0'

Now, build and deploy the project.

npm run build

npx cap sync

npx cap open ios

To add the plugin run:

npm i cordova-plugin-bluetoothle --save
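
For illustration, here is a minimal TypeScript sketch of the kind of scan behind the device listing shown earlier, written against the cordova-plugin-ble-central API mentioned above (note that cordova-plugin-bluetoothle, installed here, is a different plugin with its own API; the shapes below are assumptions based on ble-central's documented callbacks):

// Minimal sketch: list nearby BLE devices with cordova-plugin-ble-central.
// The global `ble` object is injected by the plugin at runtime on a device.
declare const ble: {
  scan(services: string[], seconds: number,
       success: (device: BleDevice) => void,
       failure: (error: string) => void): void;
};

interface BleDevice {
  id: string;      // MAC address on Android, UUID on iOS
  name?: string;   // advertised name, if any
  rssi: number;    // signal strength
}

export function scanForDevices(results: BleDevice[]): void {
  // Scan all services for 5 seconds; the success callback fires once per discovered device.
  ble.scan([], 5,
    (device) => results.push(device),
    (error) => console.error('BLE scan failed', error));
}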







Tuesday, March 21, 2023

A Concept2 Remote Racing Prototype Architecture

This article is about a prototype created to connect Concept2 PM5 erg computers via the internet. The system makes it possible to remotely race each other in groups without a central organiser.

A mobile app created with Ionic (see picture below) is used to connect to the local PM5 via Bluetooth. The app, called Ergregatta, sends data collected from the local PM5 computer to a remote server and displays real-time race information about the other participants received from that server.


The Ergregatta messaging server is a Spring Boot app with a single web socket endpoint. The web socket handling is implemented with Java-WebSocket from TooTallNate, a very nice, simple and clean implementation.

The server groups racers into races using the raceId, an md5 hash of the username of the racer hosting the race, which is submitted as a query parameter to the web socket endpoint before the connection upgrade.
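
As a sketch of the client side, joining a race might look like the following in TypeScript using the ws package. Only the raceId scheme comes from the prototype; the host and endpoint path are placeholders:

import { createHash } from 'crypto';
import WebSocket from 'ws';

// Join a race hosted by the given user. The raceId is the md5 hash of the
// hosting username, passed as a query parameter before the connection upgrade.
function joinRace(hostUsername: string): WebSocket {
  const raceId = createHash('md5').update(hostUsername).digest('hex');
  // The host and path below are made up; the query-parameter name matches the text above.
  const socket = new WebSocket(`wss://example.com/race?raceId=${raceId}`);
  socket.on('message', (data) => {
    // Real-time race information about the other participants.
    console.log('race update', data.toString());
  });
  return socket;
}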

By hosting the server in a Kubernetes cluster, as shown below, it is possible to scale the servers horizontally while using Istio to route web socket requests with the same raceId to the same server instance, or pod (see below).
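
One way to express this routing in Istio is consistent hashing on the raceId query parameter in a DestinationRule. The sketch below is illustrative only; the service name is an assumption:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ergregatta-messaging
spec:
  host: ergregatta-messaging-service   # assumed k8s service name
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpQueryParameterName: raceId   # all sockets for one race hash to the same pod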



The installation shown with kubectl; one can see the ergregatta messaging pods running.


This is how the setup looks in Kiali, which is provided by the Istio installation.



However, even better than Kiali is a Grafana dashboard which shows the actual routing of the requests by pod.



Sunday, March 6, 2022

Deploying a microservice to Docker-Desktop's Kubernetes on Windows using Helm

What follows is a description of how to deploy a Spring Boot websocket microservice to a Kubernetes cluster running in Docker-Desktop. The prerequisites for the deployment are:

  1. The Spring Boot app has been dockerized.
  2. Docker-Desktop (4.4.3) has been installed and Kubernetes (v1.22.5) enabled.
  3. kubectl (v1.22.5) has been configured to connect to the k8s cluster started with Docker-Desktop.
  4. Helm (v3.8.0) has been installed.

Create a subfolder in the Spring Boot project directory called helm. Below one sees the result in IntelliJ and the contents of the simple Dockerfile as well.


Change into the helm directory and run:

>helm create name-of-project    (here: ergregatta-messaging-service)


The Helm Chart Template Guide is a good place to learn about the contents of the generated folder.

In the values.yaml file in the newly generated folder there were only a few things that had to be changed. The first and most obvious one is "image.tag", shown below; it was set to point to the image in my local Docker image repository, which came as a result of running the docker build command.



Other changes included setting "ingress.enabled" to true and setting "ingress.className" to "nginx", which brings us to the next step. Before deploying the microservice, one must deploy an ingress controller, in this case ingress-nginx. The helm command to do so was:

>helm install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace

I also had to change the targetPort in the service.yaml file to match the port configured in the Dockerfile.

Also, the hostname at which the microservice (k8s service) will be available to the outside needs to be configured in values.yaml. The value in this case was:

ingress.hosts.host: rowwithme 
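
Putting these values.yaml changes together, the relevant sections might look like the following sketch (the image repository and tag are illustrative; use whatever the docker build command produced locally):

image:
  repository: ergregatta-messaging-service   # illustrative
  tag: "0.0.1"                               # illustrative
  pullPolicy: IfNotPresent

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: rowwithme
      paths:
        - path: /
          pathType: Prefix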

However, the hostname will only function properly if one, on Windows, adds it to the C:\Windows\System32\drivers\etc\hosts file. As in:

127.0.0.1 rowwithme 

Now, deploy the microservice by running:

>helm install rowwithme ergregatta-messaging-service


OK, maybe I should have used wscat, but the above screenshot shows that the request is being forwarded to the microservice running in the pod and listening on port 5000.
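
For reference, a wscat smoke test would have looked something like this (the /ws path is a placeholder for whatever endpoint the Spring Boot app actually exposes):

>wscat -c ws://rowwithme/ws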

With kubectl one can see that the ingress mapping has been processed by the ingress controller, as the "HOSTS" column shown below indicates.













Sunday, February 13, 2022

Debugging Apache Airflow with debugpy and VS Code while running in Docker

To be able to debug Apache Airflow using Visual Studio Code, we first want to build a Docker image from the sources. Start by cloning the Apache Airflow GitHub repository and then open the folder using VS Code.

In the Dockerfile, change the following values:

ARG AIRFLOW_INSTALLATION_METHOD="."

ARG AIRFLOW_SOURCES_WWW_FROM="airflow/www"
ARG AIRFLOW_SOURCES_WWW_TO="/opt/airflow/airflow/www"

ARG AIRFLOW_SOURCES_FROM="."
ARG AIRFLOW_SOURCES_TO="/opt/airflow"

Then build the image with the new settings and run it, overriding the entry point:


docker build -t my-image:0.0.1 -f Dockerfile .

docker run -p 8080:8080 -p 5678:5678 --entrypoint /bin/bash -it my-image:0.0.1

In the container, first install debugpy.


pip install debugpy

We'll need the installed location of the airflow code for our launch.json configuration file. You can find it by running:


python -m pip -V
pip 21.3.1 from /home/airflow/.local/lib/python3.7/site-packages/pip (python 3.7)

Now, while in the running container, start airflow (here I'll just call --help) with:


python -m debugpy --listen 0.0.0.0:5678 --wait-for-client -m airflow --help
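
The same pattern works for a real entry point; for example, to debug the webserver instead of --help (port 5678, published by the docker run above, is what the IDE attaches to):

python -m debugpy --listen 0.0.0.0:5678 --wait-for-client -m airflow webserver --port 8080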

In the cloned repository directory, create a launch.json file in the .vscode directory. The value for remoteRoot should be taken from the output of the "python -m pip -V" command above:


    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "justMyCode": false,
            "connect": {
                "host": "localhost",
                "port": 5678
            },
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/airflow",
                    "remoteRoot": "/home/airflow/.local/lib/python3.7/site-packages/airflow"
                }
            ]
        }
    ]
}

In VS Code open the __main__.py file in the airflow folder of the project and place your breakpoints. Now run the debug configuration from the launch.json file:


Of course the "Here we go!!!!" is from me :>)


Sunday, January 30, 2022

Deploying Airflow on AWS EKS and exposing the webserver UI for learning purposes

This description assumes that you already have an AWS account. It reveals nothing new and only extracts or copies the instructions created by some real pros (unlike the hacker of this how-to) from the links cited throughout this post.

Most of the cluster work was done on an Amazon Linux instance with kubectl, eksctl and helm installed.

First create an eks cluster as follows:

eksctl create cluster \
  --name dev-apps \
  --region eu-central-1 \
  --version 1.21 \
  --nodegroup-name linux-nodes \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 2 \
  --with-oidc \
  --ssh-access \
  --ssh-public-key ergregatta-20200928 \
  --managed

What follows is not necessary; however, it is nice for learning purposes to install the Kubernetes dashboard. To do so, follow the instructions here, or skip down to where we install Airflow with helm:

https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html

Here you will have to run the kubectl proxy from your laptop and to do so you’ll need the aws cli as well as kubectl installed.

 
$aws eks update-kubeconfig --region eu-central-1 --name dev-apps
 

This will configure the local kubectl to work with the cluster created above.

Then, to get a token to use later for accessing the console in the browser, run this (here run in Git Bash):

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Name:         eks-admin-token-rswtg
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: eks-admin
              kubernetes.io/service-account.uid: 01be9965-5fd5-469e-97e6-6bb6e0c5c5f9
 
Type:  kubernetes.io/service-account-token
 
Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im0wUXZrTE1NeTZNdHlNd3B4U25UOGI2aTVyc2tpUl9BNDJ3M2k1ZGYtQ1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJla3MtYWRtaW4tdG9rZW4tcnN3dGciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZWtzLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDFiZTk5NjUtNWZkNS00NjllLTk3ZTYtNmJiNmUwYzVjNWY5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmVrcy1hZG1pbiJ9.fnHn42Z5gNJ3ZzGebIo7Fo1t8gd1EsGXdDtq9TxcZalcGICRPd3B-8Kwy8CT4-qYEDrNTX27heDTbIJmgod5eZxbFDZMTPyzRKcuk_T1TXFiSfCLRi4wtlWgT-E_5EIJTteqbWk-GvrmTr1O4vmIzNA-8Y4d2sinEGYbESmT8jOK26KmwPKuizKxrzZGYSIL9so3cHuSRe-33IeS0XYR1rk7uU2NDTAGSKMA3-wYLk9heSVdReMfDC__DKlRGR6GMb18jxqi5C08mqJyR7DPVjnR4WTpAh9MO-7SqEQiW6MEsWmHgDbHFIPYg_TN7xPDp3fT5pbbBR70jX8ka2sFog
ca.crt:     1066 bytes

 

Then start the proxy

 
$kubectl proxy
 

and go to the localhost URL given in the instructions and use the token above to log into the dashboard.

Now that the optional installation of the dashboard is done, let's return to installing Airflow into the EKS cluster by using the helm chart as described here:

https://airflow.apache.org/docs/helm-chart/stable/index.html
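
At the time, the installation from that page boiled down to adding the chart repo and installing the chart into its own namespace:

$helm repo add apache-airflow https://airflow.apache.org
$helm upgrade --install airflow apache-airflow/airflow --namespace airflow --create-namespace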

If you have the dashboard installed, then you can browse around and see all the components which have been installed for airflow.

Next we have to make the service available over the internet, which we shall do by exposing the airflow-webserver k8s service, following these instructions:

https://www.eksworkshop.com/beginner/130_exposing-service/exposing/

Replace the namespace in the above instructions with “airflow” and the service with “airflow-webserver”. With this done, you should be able to access the airflow-webserver via http and log in with admin/admin (none of which is secure).
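
In practice, with the chart's default ClusterIP service, that amounts to patching the service type and then reading off the load balancer's hostname (a sketch; the names match the chart defaults used above):

$kubectl -n airflow patch svc airflow-webserver -p '{"spec": {"type": "LoadBalancer"}}'
$kubectl -n airflow get svc airflow-webserver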

To delete everything just run:

 
$eksctl delete cluster --name dev-apps
 

and say good-bye.

Sunday, May 2, 2021

Windows 10, bleno and WinUSB

I am using bleno to simulate a Bluetooth device for a mobile app I am working on. The problem is that for bleno to work on Windows one must install the WinUSB driver, and if WinUSB is installed rather than the default driver it is hard, on a Dell Latitude, to power Bluetooth on, because the control button will not appear in the Action Center menu, as shown in the following image.



Because the button was missing, I didn't have any information about whether the device was on or off. I only knew that Bluetooth must be off because a "poweredOff" event was being triggered during the startup of my application. Attempts to power on Bluetooth with PowerShell also failed.
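
For context, the "poweredOff" event surfaces in bleno's stateChange handler. Here is a minimal TypeScript sketch of such a startup hook (the advertised device name is made up):

// bleno ships without TypeScript typings, hence the untyped require.
const bleno = require('bleno');

bleno.on('stateChange', (state: string) => {
  console.log('adapter state:', state); // "poweredOff" here can mean the radio is off or the wrong driver is active
  if (state === 'poweredOn') {
    // Advertise a dummy peripheral once the WinUSB-backed adapter is up.
    bleno.startAdvertising('test-device', []);
  } else {
    bleno.stopAdvertising();
  }
});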

The steps described below should let one work with bleno on Windows without too many hassles.

First make sure the default Bluetooth driver is installed, that the Bluetooth button appears in the Action Center menu as shown above, and that Bluetooth is on. In the Device Manager, it should look as follows:



Next install WinUSB, preferably using Zadig.


After installation, the Device Manager should appear as follows and it should be possible to open the Bluetooth device using bleno.



When one is done working with bleno, roll back the WinUSB driver. This way the Bluetooth on/off button will be visible should the Bluetooth device turn itself off once the laptop is closed or shut down. Following the above steps allowed me to work with bleno without too many hassles and without annoying computer restarts.