Category: Blog

  • One network monitoring tool to rule them all

    Remember when something broke down and you learned about it the hard way?

    Imagine this: you’re happily cruising through your digital day when, suddenly, the Wi-Fi decides it’s time for a dramatic exit – poof! Gone. And in its place, dozens of employees asking what happened to the internet connection.

    That’s where a trusty network monitoring tool comes in – think of it as your digital Sherlock Holmes. It’s always on the lookout, sniffing out issues before they turn into internet mysteries that leave you scratching your head.

    And in the line-up of these digital detectives, Uptime Kuma is definitely my MVP. It’s not just free and open-source – it’s also a breeze to set up, and its web interface? Let’s just say it could win awards for both functionality and style.

    So, in this guide, let’s go ahead and set it up on a Linux machine.

     
    Step 1: Installing Node.js in Linux
    Begin by logging into your server and updating the local package index.
    $ sudo apt update
    Since Uptime Kuma is written in Node.js, you must install it first. Release 16.x is a good choice since it supports most Ubuntu versions.
    Switch to the root user.
    $ sudo su
    Add the NodeSource 16.x repository to your system using the curl command.
    # curl -sL https://deb.nodesource.com/setup_16.x | bash
    Once added, install Node.js using the package manager.
    # apt install nodejs -y
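    Before moving on, it’s worth confirming that the installed Node.js is recent enough. Here’s a small illustrative sketch of such a check; the version string at the bottom is only an example of what `node -v` typically prints:

    ```shell
    # Sanity check that the installed Node.js is recent enough for Uptime Kuma.
    # version_ok takes a version string such as the output of `node -v` ("v16.20.2")
    # and succeeds when the major version is at least 16.
    version_ok() {
      v=${1#v}          # strip the leading "v" that `node -v` prints
      major=${v%%.*}    # keep everything before the first dot
      [ "$major" -ge 16 ]
    }

    # On a live system you would feed it the real output:
    #   version_ok "$(node -v)" && echo "Node.js is new enough"
    version_ok "v16.20.2" && echo "Node.js is new enough"
    ```
    
    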
     
    Step 2: Installing Uptime Kuma in Linux
    After installing Node.js, proceed to install the Uptime Kuma monitoring tool. Clone the Uptime Kuma repository from GitHub.
    # git clone https://github.com/louislam/uptime-kuma.git
    Navigate to the Uptime Kuma directory.
    # cd uptime-kuma/
    Set up the monitoring tool.
    # npm run setup
     
    Step 3: Run Uptime Kuma with pm2
    PM2 is a Node.js process manager. Install it globally using the following command from within the uptime-kuma directory.
    # npm install pm2@latest -g
    Start the pm2 daemon.
    # pm2 start npm --name uptime-kuma -- run start-server -- --port=3001 --hostname=127.0.0.1
    Configure the application to start automatically after a reboot.
    # pm2 startup
    Save the application state.
    # pm2 save
     
    Step 4: Configure Apache as a Reverse Proxy for Uptime-Kuma
    Install Apache web server.
    $ sudo apt install apache2 -y
    Enable required Apache modules.
    $ sudo a2enmod ssl proxy proxy_ajp proxy_wstunnel proxy_http rewrite deflate headers proxy_balancer proxy_connect proxy_html
    Create a virtual host file for Uptime Kuma.
    $ sudo nano /etc/apache2/sites-available/uptime-kuma.conf
    Paste the following configuration, replacing the ServerName value with your server’s Fully Qualified Domain Name or public IP address.
    <VirtualHost *:80>
        ServerName kuma.name.com
        ProxyPass / http://localhost:3001/
        ProxyPassReverse / http://localhost:3001/
        RewriteEngine on
        RewriteCond %{HTTP:Upgrade} websocket [NC]
        RewriteCond %{HTTP:Connection} upgrade [NC]
        RewriteRule ^/?(.*) ws://localhost:3001/$1 [P,L]
    </VirtualHost>
    Activate the Apache virtual host.
    $ sudo a2ensite uptime-kuma
    Restart Apache.
    $ sudo systemctl restart apache2
     
    Step 5: Access Uptime Kuma from the Web UI
    • With Uptime Kuma installed, visit your server via the domain name or IP address in a browser.
    • http://server-ip or domain-name
    • Create an Admin account by providing a username and password.
    • Log in to Uptime Kuma’s dashboard. To monitor a new host, click ‘Add New Monitor’ and provide the site details.
    • Add item to Monitoring
    • Uptime Kuma will start monitoring your site and display uptime metrics.
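    Under the hood, an HTTP monitor boils down to polling the target and classifying the response code. Here’s a minimal shell sketch of that idea; the URL is illustrative, and Uptime Kuma itself does far more (retries, latency tracking, notifications):

    ```shell
    # Classify an HTTP status code the way an uptime monitor would:
    # 2xx/3xx responses count as UP, everything else as DOWN.
    classify_status() {
      case "$1" in
        2*|3*) echo "UP" ;;
        *)     echo "DOWN" ;;
      esac
    }

    # On a live system you could feed it a real status code from curl:
    #   classify_status "$(curl -s -o /dev/null -w '%{http_code}' https://example.com)"
    classify_status 200   # prints "UP"
    classify_status 503   # prints "DOWN"
    ```
    
    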

    And that’s it! You have successfully installed and configured Uptime Kuma, and your monitoring is up and running. Now, you’ll be the one saying, ‘Hey, there was a network issue today, but don’t worry, I took care of it.’

  • First steps in reclaiming your online privacy

    Why does it feel like tech titans know more about your life than you do?

    Most big companies make their software free because they track, collect, and sell their users’ data to advertising agencies.

    Privacy is the ability to keep your activity to yourself, like writing a personal journal. Anonymity, in contrast, is when people can see what’s happening, but don’t know it’s you doing it. An illustrative example would be creating graffiti on that old building in your hometown during the night. Everyone can see the result, but no one knows who did it.

    This article will focus on giving some free alternative tools that can be used to improve the privacy side of things.

    Browser


    Browsers like Microsoft Edge or Chrome can collect information like your browsing history, usernames, passwords, location, etc. And no, incognito mode won’t stop this data from being collected.

    My recommendation:

    Brave

    Brave is a free, open-source, Chromium-based browser that is very privacy-focused right out of the box. By default, it blocks ads and trackers, and it’s also customizable and fast.

    I’ve chosen it over Firefox because it is better out of the box, but with the right customization Firefox can become as desirable as Brave or even better.

    PS: make sure the search engine selected in settings is Brave Search or DuckDuckGo.

    VPN


    The most important features of a VPN are hiding your traffic from your ISP and spoofing your location, which is useful for accessing region-locked content such as Netflix shows.

    Mostly, all other advertised features are just marketing.

    My recommendation:

    Proton VPN

    Proton is the only well known VPN provider that I would recommend if you consider getting one. It is open source and the company has a fairly clean security and privacy record.

    Password manager


    What if you use the same e-mail and password for all of your accounts, and, hypothetically, Facebook has a data breach?

    Well, most likely the leaked data will end up on the dark web, and now every malicious actor knows your Facebook log-in details.

    And because you used the same e-mail and password pretty much everywhere, they can access all your online services.

    The solution is to use a different password for each service, but who can remember tens of different complicated passwords? I certainly cannot, and here’s where a password manager comes into play.

    My recommendation:

    Bitwarden

    Bitwarden is a free, open-source service. I’ve been using it for a couple of years now, and I’d probably be two years biologically older if it weren’t around.

    Not only can I generate a unique, secure password (I’ve been using 64-character passwords lately; good luck guessing that one), but it also syncs to all my devices.

    As a bonus, the autocomplete feature fills the username and password in a flash using a click on my smartphone or just a keyboard shortcut on my laptop. Saves a lot of time in the long run.
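    For the curious, the core idea behind generated passwords fits in a few lines of shell. This is only an illustrative sketch, not how Bitwarden actually implements it:

    ```shell
    # Generate a random password of the requested length by mapping bytes from
    # /dev/urandom onto a printable character set. Illustrative only; a real
    # password manager uses a vetted CSPRNG and configurable character classes.
    generate_password() {
      length=${1:-64}
      LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c "$length"
      echo
    }

    generate_password 64   # prints a 64-character random password
    ```
    
    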

    Cloud storage


    How would you feel if all your holiday and cat pictures were on a stranger’s computer? Again, I have bad news for you: this is exactly what’s happening.

    The most private (and, if you’re paying for cloud storage, most economical) option is buying a physical hard drive and storing your data on it.

    Especially in today’s day and age when they can be as small as a wireless headphone case.

    If you’re like me, however, you just cannot live without cloud storage.

    My recommendation:

    MEGA

    MEGA gives you 20 GB of storage and is end-to-end encrypted. It is convenient as it’s browser-based, and it encrypts and decrypts during the transfer process, which is something not many providers do.

    These are the tools I think everyone should use as a baseline, not only for privacy but also for security. The closed-source software of multi-billion-dollar companies is unverifiable by nature, and the only assurance we get is: ‘we respect your privacy and have the best security… trust me, bro’. Yet we see vulnerabilities and data breaches happening all the time. And lastly, I think the first three recommendations offer a better user experience anyway. So go ahead and give them a try, you can thank me later : )
  • How would you fix a network issue?

    How do you approach it? Do you panic? Or do you start with the unofficial level 1 support greeting, “Did you try turning it off and on?”

     

    “You don’t have a place to start if you’re put in the middle.”

    That being said, this is the 6-step approach that works best for me:
    1. Identify the problem
    2. Establish a theory
    3. Game plan
    4. Implement
    5. Verify
    6. Documentation update (This step typically makes people laugh until one day when they realize that an issue they currently face could have been fixed much faster if the documentation was up-to-date. At least that’s what my colleagues say; personally, I have documented everything from day one in my career, I swear.)

     

    Remember that when identifying the issue, there are three approaches:

    a) top-down
    b) bottom-up
    c) and divide and conquer

    The last one is usually the most time-efficient, while the first is the most consistent and has never failed me in finding the root cause of an issue.

     

    Let’s put this reasoning to the test and consider the following scenario:

    Users of PC1 and PC2 are reporting that they cannot access the corporate web page. This, along with a network diagram-style topology, is all the information you have. You figure out that the web page is hosted on the HTTP Server located in the internal data centre network. An inverse DNS search shows that the server’s IP address is 10.1.1.100.

     

    If you’re following the strategy, the process should look something like this:

    Step 1: Identify the issue. Let’s use a bottom-up approach.
    Layer 1 (Physical Layer) Testing:

    Power Status: Verify that all devices are powered on, including the PCs, the HTTP Server, Multilayer Switch S1, Multilayer Switch S2, and the Firewall (FW1).
    Physical Connectivity: Check that PC1 and PC2 are physically connected to Multilayer Switch S1, and the HTTP Server is connected to Multilayer Switch S2. Ensure that the cables are plugged in securely at both ends.
    LED Status: Observe the LED link status indicators on the relevant ports on S1 and S2. If any LED is off, test the cable with a cable tester.

    Layer 2 (Data Link Layer) Testing:

    Neighbour Visibility/Link: On the switches, check the MAC address table to ensure that the MAC addresses of the PCs and the HTTP Server have been learned. This confirms Layer 2 connectivity.
    VLAN Configuration: Ensure that the correct VLANs are assigned to the switch ports connected to PC1, PC2, and the HTTP Server. Based on the topology, all devices should be on VLAN 10.
    Spanning Tree Protocol (STP): Check the STP status on S1 and S2 to ensure there are no unexpectedly blocked ports or topology changes.
    EtherChannel: It’s worth checking if any are configured and, if so, that the configuration is correct.

    Layer 3 (Network Layer) Testing:

    IP Configuration: Verify the IP settings on PC1 and PC2. Both should have IP addresses within the 10.1.2.0/24 network, with the default gateway set to the internal interface of FW1 (10.1.2.254). Confirm the HTTP Server’s IP configuration is also correct.
    DHCP Settings: Check the DHCP server configuration and lease assignments to ensure the scopes are correct.
    Ping Test: From PC1 and PC2, ping the IP address of the HTTP Server and other hosts in the 10.1.1.X network to test connectivity. Since everything seems to work, we move forward.
    Access-List Check: After thoroughly reviewing the access control lists (ACLs) on the Cisco Firewall (FW1), we discover that there appears to be an issue.
    An entry in the ACL is denying HTTP traffic from the internal network (10.1.2.0/24) to the HTTP Server’s IP address:
    # access-list 100 deny tcp 10.1.2.0 0.0.0.255 host 10.1.1.100 eq 80

     

    Step 2: Establish a Theory

    The ACL entry that was found is denying HTTP traffic from the 10.1.2.0/24 network to the HTTP Server at 10.1.1.100. Since the ping test from PC1 and PC2 to the HTTP Server works, we know that IP connectivity exists, but HTTP traffic is being blocked. This incorrect ACL entry is likely the cause of the issue.

    Step 3: Game Plan

    Let’s remove the erroneous ACL entry and replace it with one that permits HTTP traffic.
    It’s also a good time to review all ACLs for any other potential misconfigurations.

     

    Step 4: Implement

    Since the devices are Cisco, we need to do the following:
    Log in to FW1 with the necessary credentials.
    Enter privileged EXEC mode by typing ‘enable.’
    Enter global configuration mode by typing ‘conf t.’
    Remove the incorrect ACL entry by typing ‘no access-list 100 deny tcp 10.2.0.0 0.0.0.255 host 10.1.1.100 eq 80.’
    Insert the correct ACL entry by typing ‘access-list 100 permit tcp 10.1.2.0 0.0.0.255 host 10.1.1.100 eq 80.’
    Exit global configuration mode and save the configuration by typing ‘write memory.’
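    Putting those steps together, the session would look roughly like the sketch below, assuming an IOS-style CLI (the exact syntax varies by platform; an ASA, for instance, uses netmasks instead of wildcards). One caveat worth knowing: on IOS, ‘no access-list 100 <entry>’ deletes the entire numbered ACL, so it is safer to edit the list through named ACL configuration mode. Entry ordering within the ACL may also matter in a real configuration.

    ```
    FW1> enable
    FW1# configure terminal
    ! Edit ACL 100 through named ACL configuration mode so that only
    ! the one faulty entry is removed, not the whole list.
    FW1(config)# ip access-list extended 100
    FW1(config-ext-nacl)# no deny tcp 10.1.2.0 0.0.0.255 host 10.1.1.100 eq 80
    FW1(config-ext-nacl)# permit tcp 10.1.2.0 0.0.0.255 host 10.1.1.100 eq 80
    FW1(config-ext-nacl)# end
    FW1# write memory
    ```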

    Step 5: Verify

    Test connectivity from PC1 and PC2 to the web server page.
    Since we dealt with access lists, I strongly recommend checking all services and device statuses in the organization. This is where automation shines.

     
    Step 6: Documentation Update

    In this scenario, make sure to have an up-to-date running config backup.
    Additionally, check whether the logs from when the problematic change was made are still available, so you have the upper hand in case you need a favour from the person who messed up.