
2024/11/09

Ubuntu: Installing GeoGebra

I tried to install GeoGebra using Flatpak. Just as an
experiment, I had installed Flatpak itself using apt.

I was greeted with the following error when I ran the command to install GeoGebra.

Error:
sangram@sangram-Inspiron-14-5430:~$ flatpak install flathub org.geogebra.GeoGebra

Note that the directories

'/var/lib/flatpak/exports/share'
'/home/sangram/.local/share/flatpak/exports/share'

are not in the search path set by the XDG_DATA_DIRS environment variable, so
applications installed by Flatpak may not appear on your desktop until the
session is restarted.

Looking for matches…
error: No remote refs found for ‘flathub’

Solution:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

Open your .bashrc file (no sudo needed for a file you own):
nano ~/.bashrc

Add the following line to it:
export XDG_DATA_DIRS=$XDG_DATA_DIRS:/var/lib/flatpak/exports/share:/home/$USER/.local/share/flatpak/exports/share

Reload bashrc
source ~/.bashrc
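If .bashrc ends up being sourced more than once, the export line above appends duplicate entries each time. A slightly more defensive variant (a sketch; paths taken from the error message above) appends only when the Flatpak directory is not already present:

```shell
# Append the Flatpak export dirs to XDG_DATA_DIRS only if they are missing.
flatpak_dirs="/var/lib/flatpak/exports/share:$HOME/.local/share/flatpak/exports/share"
case ":${XDG_DATA_DIRS}:" in
  *":/var/lib/flatpak/exports/share:"*) ;;  # already present, nothing to do
  *) export XDG_DATA_DIRS="${XDG_DATA_DIRS:+$XDG_DATA_DIRS:}$flatpak_dirs" ;;
esac
echo "$XDG_DATA_DIRS"
```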

Verify that the Flathub remote was successfully added by listing the Flatpak remotes:
flatpak remotes

Output:
Name Options
flathub system


Now I was able to install GeoGebra using the following command:
flatpak install flathub org.geogebra.GeoGebra

To launch it, run
flatpak run org.geogebra.GeoGebra

If you want to remove the app, run
flatpak uninstall flathub org.geogebra.GeoGebra

Confirm it is deleted by listing the installed apps:
flatpak list

2024/11/04

Ubuntu: Recover a corrupt .bashrc

If you have somehow messed up your .bashrc file and it
starts malfunctioning, you can back it up and
replace it with a working copy.
The .bashrc file is usually located in /home/username (sangram is my username).
cd /home/sangram
Backup
cp .bashrc .bashrc_backup

Delete
rm .bashrc
First Way:
Go to https://gist.github.com/marioBonales/1637696,
which has the default Ubuntu .bashrc content. Copy its
content and run

nano .bashrc

Then paste the copied content into it. Then

source .bashrc

Second Way:
Copy the content of /etc/skel/.bashrc
and paste it into the empty .bashrc.

Now
source .bashrc
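The second way can be collapsed into plain cp commands, with no copy-pasting; a minimal sketch that also keeps a copy of the broken file:

```shell
# Keep a copy of the broken file, then restore the stock Ubuntu .bashrc
cp ~/.bashrc ~/.bashrc_broken 2>/dev/null || true
cp /etc/skel/.bashrc ~/.bashrc
source ~/.bashrc
```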


2024/11/02

Ubuntu: Disable Laptop Internal Keyboard

 

I am using a Dell laptop whose internal keyboard was
acting weird, in the sense that it made sounds out of nowhere
and abruptly deleted folders and files without
me pressing any key.
I also observed that when I was typing in a document
it would start deleting content. I had a technician replace
the keyboard, but it still behaved the same way.
I decided to disable it, as I have a Bluetooth keyboard and mouse that
can do the trick.

Here is how you can disable a laptop's internal keyboard in Ubuntu;
the same method should work wherever there is GRUB.

Edit /etc/default/grub
sudo nano /etc/default/grub

In the opened file, look for
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

Now replace that line with the following:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i8042.nokbd"

We need to regenerate grub.cfg so that our changes take effect.
Run
sudo update-grub
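The same edit can be scripted with sed. Since changing /etc/default/grub needs root, here is the substitution demonstrated on a temporary copy (the sample line is assumed to match the stock Ubuntu default):

```shell
# Work on a sample copy instead of the real /etc/default/grub
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.sample
# Insert i8042.nokbd inside the existing quotes
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"$/GRUB_CMDLINE_LINUX_DEFAULT="\1 i8042.nokbd"/' /tmp/grub.sample
cat /tmp/grub.sample
```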

Now you can reboot the laptop, and the internal keyboard will be disabled.
You no longer have to worry about files and folders being deleted abruptly,
about the content of an open document disappearing, or about the weird
keyboard sound telling you some key is stuck.


Ubuntu: Mounting a Windows 11 partition

Install the packages required to mount a Windows partition on Ubuntu 24.04:
sudo apt install ntfs-3g fuse3

Make a directory on which to mount the Windows C: partition:
sudo mkdir -p /mnt/windows_c

Find the partition to mount:
sudo fdisk -l
 
 Output:
(loop device entries for snap packages omitted)
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: PM9B1 NVMe Samsung 512GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D312188E-7631-40DA-9BB4-EB29F963E1A6

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 411647 409600 200M EFI System
/dev/nvme0n1p2 411648 673791 262144 128M Microsoft reserved
/dev/nvme0n1p3 673792 504090623 503416832 240G Microsoft basic data
/dev/nvme0n1p4 504090624 954650623 450560000 214.8G Linux filesystem
/dev/nvme0n1p5 954650624 956852223 2201600 1G Windows recovery environment
/dev/nvme0n1p6 956852224 997101567 40249344 19.2G Windows recovery environment
/dev/nvme0n1p7 997103616 1000187903 3084288 1.5G Windows recovery environment


(remaining loop device entries omitted)

We have identified C: as /dev/nvme0n1p3.

sudo mount -t ntfs-3g /dev/nvme0n1p3 /mnt/windows_c
This failed with the following error:
NTFS signature is missing.
Failed to mount '/dev/nvme0n1p3': Invalid argument
The device '/dev/nvme0n1p3' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

Let's try another method:
sudo blkid
Output:
/dev/nvme0n1p4: UUID="2be2c7c1-62de-4734-8373-48e0fc083321" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="d2d1f772-7ad2-44b8-b4a6-c3651ff97179"
/dev/loop1: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/nvme0n1p7: LABEL="DELLSUPPORT" BLOCK_SIZE="512" UUID="AA46EDC646ED92FD" TYPE="ntfs" PARTUUID="56b859e8-8a4f-43f1-8673-39f46dc6ac4b"
/dev/nvme0n1p5: LABEL="WINRETOOLS" BLOCK_SIZE="512" UUID="AED6479BD64762A7" TYPE="ntfs" PARTUUID="d0944af5-6b66-4830-a766-71376d7b791b"
/dev/nvme0n1p3: TYPE="BitLocker" PARTLABEL="Basic data partition" PARTUUID="9838e955-a1ed-4bb2-b548-ff0bbfee5dea"
/dev/nvme0n1p1: LABEL_FATBOOT="ESP" LABEL="ESP" UUID="9280-D831" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="5c9392f9-e8a7-4bda-a20a-9cbbc370888e"
/dev/nvme0n1p6: LABEL="Image" BLOCK_SIZE="512" UUID="CEF647B2F647999B" TYPE="ntfs" PARTUUID="401fff82-a681-4e84-b2a7-617f3215afd4"
(squashfs loop device entries omitted)
/dev/nvme0n1p2: PARTLABEL="Microsoft reserved partition" PARTUUID="b10249a2-2134-4fc3-ab2b-46a1239d75b8"
The blkid output shows that /dev/nvme0n1p3 has TYPE="BitLocker": the partition is BitLocker-encrypted, which is why the plain NTFS mount failed. Install dislocker to unlock it:
sudo apt install dislocker
sudo mkdir /mnt/bitlocker
sudo dislocker -r -V /dev/nvme0n1p3 -- /mnt/bitlocker
The unlocked data will be accessible as the dislocker-file inside /mnt/bitlocker:
sudo mount -o loop /mnt/bitlocker/dislocker-file /mnt/windows_c
Automounting Partitions at Boot:
We need to find the UUID of the Windows partition.
Hence
sudo blkid /dev/nvme0n1p3
Output:
/dev/nvme0n1p3: TYPE="BitLocker" PARTLABEL="Basic data partition" PARTUUID="9838e955-a1ed-4bb2-b548-ff0bbfee5dea"


Edit fstab file:
sudo nano /etc/fstab
Add the following at the end of the file (note that blkid reported the value above as PARTUUID, not UUID — the BitLocker partition has no filesystem UUID, which is worth keeping in mind if the UUID= entry fails to match):
# Mount the BitLocker partition using dislocker
UUID=9838e955-a1ed-4bb2-b548-ff0bbfee5dea /mnt/bitlocker dislocker defaults 0 0

# Mount the unlocked data
/mnt/bitlocker/dislocker-file /mnt/windows_c ntfs-3g defaults,windows_names,locale=en_US.utf8 0 0
If we prefer that the partition is not mounted automatically, then we need to update the above to the following:
# Mount the BitLocker partition using dislocker, do not auto-mount
UUID=9838e955-a1ed-4bb2-b548-ff0bbfee5dea /mnt/bitlocker dislocker noauto 0 0

# Mount the unlocked data, do not auto-mount
/mnt/bitlocker/dislocker-file /mnt/windows_c ntfs-3g noauto,defaults,windows_names,locale=en_US.utf8 0 0
But then you have to mount & unmount it manually.

Manually Mounting whenever needed:
sudo dislocker -r -V /dev/nvme0n1p3 -- /mnt/bitlocker
sudo mount -o loop /mnt/bitlocker/dislocker-file /mnt/windows_c
Manually Unmounting at shutdown:
sudo umount /mnt/windows_c
sudo umount /mnt/bitlocker
I would prefer the automated mounting approach.
Now I need it to be gracefully unmounted automatically before shutdown.
So let's write a script to unmount the partitions:
sudo nano /usr/local/bin/unmount_bitlocker.sh
Add following code into it
#!/bin/bash
# Unmount the unlocked data
umount /mnt/windows_c 2>/dev/null
# Unmount the dislocker mount
umount /mnt/bitlocker 2>/dev/null
Save & exit.
Make the script executable:
sudo chmod +x /usr/local/bin/unmount_bitlocker.sh

Create a service to call this script
sudo nano /etc/systemd/system/unmount-bitlocker.service

Add following content to it
[Unit]
Description=Unmount BitLocker Partitions
DefaultDependencies=no
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/unmount_bitlocker.sh
RemainAfterExit=yes

[Install]
WantedBy=halt.target reboot.target shutdown.target
Let's enable the service so it runs at shutdown:
sudo systemctl enable unmount-bitlocker.service
Now suppose you modified this service; to make the
changes take effect, do the following:

sudo systemctl daemon-reload
sudo systemctl enable unmount-bitlocker.service
Test the configuration after making changes by running:
sudo mount -a
Now, every time the system shuts down, the script will
run and unmount the BitLocker partitions automatically, and at boot
time the partitions will be mounted.

We need to test this setup by shutting down the system and
ensuring the partitions are unmounted as expected.

The automated mounting approach failed, so /etc/fstab is modified as follows:

# Mount the BitLocker partition using dislocker, do not auto-mount
UUID=9838e955-a1ed-4bb2-b548-ff0bbfee5dea /mnt/bitlocker dislocker noauto 0 0

# Mount the unlocked data, do not auto-mount
/mnt/bitlocker/dislocker-file /mnt/windows_c ntfs-3g noauto,defaults,windows_names,locale=en_US.utf8 0 0

I am creating a script to run after boot:
sudo nano /usr/local/bin/mount_bitlocker.sh
Add following content to it
#!/bin/bash

# Unlock the BitLocker-encrypted drive (the service runs as root, so no sudo is needed)
dislocker -r -V /dev/nvme0n1p3 -- /mnt/bitlocker

# Mount the dislocker file
mount -o loop /mnt/bitlocker/dislocker-file /mnt/windows_c
Save & exit.
Make the script executable:
sudo chmod +x /usr/local/bin/mount_bitlocker.sh

create a service
sudo nano /etc/systemd/system/mount_bitlocker.service

Add Following content to service
[Unit]
Description=Mount BitLocker Encrypted Drive
After=local-fs.target

[Service]
Type=oneshot
User=root
ExecStart=/usr/local/bin/mount_bitlocker.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Save & Exit.
Enable service:
sudo systemctl enable mount_bitlocker.service
Test the script manually:
sudo /usr/local/bin/mount_bitlocker.sh

I checked this approach, and now it's working.
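With the noauto fstab entries it is easy to lose track of whether the share is currently mounted; a small check (path taken from the setup above):

```shell
# Report whether the unlocked Windows partition is currently mounted
if mountpoint -q /mnt/windows_c; then
    echo "/mnt/windows_c is mounted"
else
    echo "/mnt/windows_c is not mounted"
fi
```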

2024/10/31

Ubuntu : How to clean up space from logs ?

Clean accumulated .deb files from /var/cache/apt/archives

Move .deb files to external drive
cd /var/cache/apt/archives
sudo mv *.deb /media/sangram/Elements/Ubuntu22.4LTSAPTArchieve/
ls -lh .

Run autoclean
sudo apt-get autoclean
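Before truncating anything, it helps to see which logs actually take up the space; a quick check:

```shell
# List the ten largest entries under /var/log, biggest first
du -sh /var/log/* 2>/dev/null | sort -rh | head -n 10
```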

Truncate System, Kernel & Auth Logs:
sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/auth.log
sudo truncate -s 0 /var/log/kern.log

Truncate & Remove Apt Logs:
sudo truncate -s 0 /var/log/apt/history.log
sudo truncate -s 0 /var/log/apt/term.log

cd /var/log/apt
sudo rm -rf *.gz
ls -lh /var/log/apt/

Restart the rsyslog service:
sudo systemctl restart rsyslog


Ubuntu : How to Upgrade from Ubuntu 22.04 LTS to Ubuntu 24.04 LTS ?

Make sure your current system is fully up-to-date:
sudo apt update && sudo apt upgrade
sudo apt dist-upgrade
sudo apt autoremove

Run the following command to make sure your upgrade tool is up-to-date:
sudo apt install update-manager-core

Begin the upgrade process by running:
sudo do-release-upgrade

To check for / jump to the development release (useful when the new LTS is not yet offered), run:
sudo do-release-upgrade -d

2024/10/30

Ubuntu: add-apt-repository error - "could not find a distribution template"

Error on adding apt repository

sudo add-apt-repository ppa:forkotov02/ppa

Error :
Traceback (most recent call last):
  File "/usr/bin/add-apt-repository", line 363, in <module>
    addaptrepo = AddAptRepository()
  File "/usr/bin/add-apt-repository", line 41, in __init__
    self.distro.get_sources(self.sourceslist)
  File "/usr/lib/python3/dist-packages/aptsources/distro.py", line 91, in get_sources
    raise NoDistroTemplateException(
aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Neon/jammy

Some files in /etc/*-release got modified when you
tried to install some package; in my case I tried to add the KDE Neon
repository but did not install KDE Neon.

To rectify this, we need to reset these two files (lsb-release
& os-release) to the Ubuntu defaults.

For that follow following instructions.

Run
sudo nano /etc/*-release

First File:(lsb-release)

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04 LTS"

Second File:(os-release)
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
LOGO=start-here


Reboot the computer.

After Reboot try

sudo add-apt-repository ppa:forkotov02/ppa
sudo apt update
sudo apt install qmmp-qt6 qmmp-plugin-pack-

In my case, I was able to add the apt repository without any
problem after following the above procedure.


Ubuntu/Apt: Trying to overwrite a file which is also in another package

While doing a routine apt upgrade, you may encounter an error

similar to the one below:
dpkg: error processing archive /var/cache/apt/archives/qt6-base_6.7.2-0zneon+22.04+jammy+release+build2_amd64.deb (--unpack):
trying to overwrite '/usr/lib/x86_64-linux-gnu/libQt6Core.so.6', which is also in package libqt6core6:amd64 6.2.4+dfsg-2ubuntu1.1

Complete Error:
Preparing to unpack .../qt6-base_6.7.2-0zneon+22.04+jammy+release+build2_amd64.deb ...
Unpacking qt6-base (6.7.2-0zneon+22.04+jammy+release+build2) ...
dpkg: error processing archive /var/cache/apt/archives/qt6-base_6.7.2-0zneon+22.04+jammy+release+build2_amd64.deb (--unpack):
trying to overwrite '/usr/lib/x86_64-linux-gnu/libQt6Core.so.6', which is also in package libqt6core6:amd64 6.2.4+dfsg-2ubuntu1.1
Preparing to unpack .../qt6-declarative_6.7.2-0zneon+22.04+jammy+release+build3_amd64.deb ...
Unpacking qt6-declarative (6.7.2-0zneon+22.04+jammy+release+build3) ...
Errors were encountered while processing:
/var/cache/apt/archives/qt6-base_6.7.2-0zneon+22.04+jammy+release+build2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Then identify the file & force-overwrite it, like below:
sudo dpkg -i --force-overwrite /var/cache/apt/archives/qt6-base_6.7.2-0zneon+22.04+jammy+release+build2_amd64.deb
Then
sudo apt-get -f install
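Before forcing the overwrite, you can confirm which installed package currently owns the conflicting file with dpkg -S; demonstrated here on /bin/ls (substitute the library path from your error message):

```shell
# Ask the dpkg database which package owns a given file
dpkg -S /bin/ls
```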

2024/10/28

WSL : Install Linux GUI App on Windows 11

If WSL is not installed, then run
    wsl --install

If it is already installed, then run
    wsl --update

Restart WSL for the update to take effect:
    wsl --shutdown


Install Linux GUI App
    sudo apt update

    sudo apt install gnome-text-editor
    sudo apt install gimp
    sudo apt install nautilus
    sudo apt install vlc
    sudo apt install x11-apps


    Install Google Chrome
        cd /tmp
        wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
        sudo apt install --fix-missing ./google-chrome-stable_current_amd64.deb

    Install Microsoft Edge
        sudo apt install software-properties-common apt-transport-https wget
        wget -q https://packages.microsoft.com/keys/microsoft.asc -O- | sudo apt-key add -
        sudo add-apt-repository "deb [arch=amd64] https://packages.microsoft.com/repos/edge stable main"
        sudo apt install microsoft-edge-dev

    Install Node.js
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
        nvm ls
        nvm install --lts

    Suppose you installed VLC as mentioned above; you can then search for it in the normal
    Windows Start menu, where it is listed as "VLC Media Player (Ubuntu)".

    Now open VLC & download a sample mp3 file to the Downloads folder of Windows.

    You can access the downloaded file from the launched VLC player in
        /mnt/c/Users/sangram

    How do you access WSL files in a Windows app like Explorer?
       In Explorer, below This PC & Network, there is a Linux icon;
       from there you can navigate further.
       The path can be
            \\wsl.localhost\Ubuntu\home\sangram

2024/10/27

Ubuntu:Key is stored in legacy trusted.gpg keyring

First run

sudo apt update

If you get a message saying "Key is stored in legacy
trusted.gpg keyring" like below

Fetched 1,701 kB in 4s (422 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
28 packages can be upgraded. Run 'apt list --upgradable'
to see them.
W: http://archive.neon.kde.org/user/dists/bionic/InRelease:
Key is
stored in legacy trusted.gpg keyring
(/etc/apt/trusted.gpg), see the DEPRECATION
                section in apt-key(8) for details.

Then you can get rid of this message using the following
procedure:

cd /etc/apt
sudo cp trusted.gpg trusted.gpg.d

Now run
sudo apt update

You will not get the "Key is stored in legacy trusted.gpg
keyring" message anymore.

2024/10/23

Ubuntu: How to install the Arduino IDE ?

Download the Installable AppImage file

cd /home/sangram/Applications
wget https://downloads.arduino.cc/arduino-ide/arduino-ide_2.3.3_Linux_64bit.AppImage

Make AppImage Executable

chmod +x arduino-ide_2.3.3_Linux_64bit.AppImage

In Ubuntu 22.04 or higher

sudo add-apt-repository universe
sudo apt install libfuse2


Create the /etc/udev/rules.d/99-arduino.rules file
and add the following content to it:

SUBSYSTEMS=="usb", ATTRS{idVendor}=="2341", GROUP="plugdev", MODE="0666"

Add Launcher

nano ~/.local/share/applications/arduino-ide.desktop

Add Following Content to it

[Desktop Entry]
Version=1.0
Name=Arduino IDE
Exec=/home/sangram/Applications/arduino-ide_2.3.3_Linux_64bit.AppImage
Terminal=false
Icon=/home/sangram/Applications/arduino-ide.png
Type=Application
Categories=Utility;Development;

Clear Icon Cache
gtk-update-icon-cache ~/.icons
gtk-update-icon-cache /usr/share/icons/hicolor

For Gnome:
gsettings reset org.gnome.shell app-picker-layout

2024/10/20

Ubuntu: Troubleshooting Installation of dotnet-sdk-8.0

If you try to install .NET 8.0 on Ubuntu, you may get the following
error:


The following packages have unmet dependencies: dotnet-host-7.0 :
Conflicts: dotnet-host E: Error, pkgProblemResolver::Resolve generated breaks,
this may be caused by held packages.

Let's troubleshoot it.

Remove all repositories related to Microsoft; most of the time you added them for MSSQL & VS Code:

sudo rm -f /etc/apt/sources.list.d/mssql-release.list
sudo rm /etc/apt/sources.list.d/microsoft-prod.list.save


Now remove all installed dotnet-related packages:

sudo apt purge dotnet-sdk* dotnet-host* dotnet* aspnetcore* netstandard*

Try installing again

sudo apt update
sudo apt install dotnet-sdk-8.0

You can remove an older SDK, e.g. 6.0, as

sudo apt-get remove dotnet-sdk-6.0

You can upgrade this package by

sudo apt upgrade dotnet-sdk-8.0

Confirm installed packages
dotnet --info

You can confirm the list of installed SDKs:
dotnet --list-sdks


List runtimes
dotnet --list-runtimes

Ubuntu : Disable Pro Upgrade Messages

Whenever you try to upgrade Ubuntu through the command line using
sudo apt upgrade
you see a message regarding Ubuntu Pro,
e.g.
sangram@sangram-Inspiron-14-5430:/etc/apt/sources.list.d$ sudo apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
Get more security updates through Ubuntu Pro with 'esm-apps' enabled:

If you don't want to see this message, there are two options:


1) Activate the Pro repository.
This repository is not public and is free for up to five machines.
You'll need to create an account (email, username, password) to access it,
which provides additional security updates. To proceed,
register at https://ubuntu.com/pro to obtain your personal token,
then execute the following command:

sudo pro attach your-personal-token

This is the recommended approach by Ubuntu.

2) Remove the advertisement.
Run this command to disable the advertisement:

sudo dpkg-divert --divert /etc/apt/apt.conf.d/20apt-esm-hook.conf.bak --rename --local /etc/apt/apt.conf.d/20apt-esm-hook.conf

This will rename the configuration file with a .bak suffix,
effectively disabling it. This method will remain effective
even after future apt upgrades.

To verify that the changes have taken effect, run apt upgrade. If successful,
the additional text should no longer appear.
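You can also confirm that the diversion itself is registered; dpkg-divert --list needs no root when only reading:

```shell
# Show the ESM hook diversion if it is registered; otherwise say so
dpkg-divert --list | grep esm || echo "no ESM diversion registered"
```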


2024/10/19

Ubuntu: Using Samba to share Apt Cache archives across local network

We will copy the apt cache archive and create a folder

that can be accessed over the network using Samba.

Create a directory inside /home/sangram (sangram is my username):

mkdir -p /home/sangram/cache/apt/archives

Move all .deb files from the archive:

sudo mv /var/cache/apt/archives/*.deb /home/sangram/cache/apt/archives

Create a user sadmin for the purpose of software installation.

Create a new user (the first form skips creating a home directory; use the second if you want one):
sudo adduser --no-create-home sadmin
sudo adduser sadmin

I set the password (say "sangram") for user sadmin.

You can delete the sadmin user with the following commands:

sudo deluser --remove-home sadmin
sudo groupdel sadmin

You can switch to the sadmin user as follows:

su sadmin

Add the User to the sudo Group
sudo usermod -aG sudo sadmin

List groups to which sadmin belongs
groups sadmin

Then add sadmin to the Samba password database & set its Samba password:

sudo smbpasswd -a sadmin

Add user sadmin to groups
sudo usermod -aG sambashare sadmin

Change the ownership of the /home/sangram/cache folder so the sambashare group can access it:

sudo chown -R sangram:sambashare /home/sangram/cache/
sudo chmod 775 /home/sangram/cache/apt/archives

List users added to samba group
sudo pdbedit -L

Edit smb.conf to make folder /home/sangram/cache/apt/archives
available over network.

sudo nano /etc/samba/smb.conf

Add following to end of file

[SharedAptCacheArchives]
path = /home/sangram/cache/apt/archives
browsable = yes
writable = yes
guest ok = no
read only = no
create mask = 0755
directory mask = 0755
force create mode = 0755

Add the following to the [global] section:
usershare owner only = false

To test that the smb.conf configuration is correct, run
testparm

Restart samba service:

sudo systemctl restart smbd nmbd

If my machine's IP address is 192.168.0.122, then smb://192.168.0.122/SharedAptCacheArchives
will be the network URL.

Mount Shared SharedAptCacheArchives folder

cd /mnt

sudo mkdir SharedAptCacheArchives

sudo chmod 750 /home/sangram
sudo chown sangram:sambashare /home/sangram


sudo chmod -R 755 /mnt/SharedAptCacheArchives
sudo chown -R sangram:sambashare /mnt/SharedAptCacheArchives

sudo mount -t cifs //192.168.0.122/SharedAptCacheArchives /mnt/SharedAptCacheArchives -o sec=ntlmv2,username=sadmin,password=sangram,vers=3.0,uid=$(id -u sadmin),gid=$(id -g sadmin)

If mounted successfully, /mnt/SharedAptCacheArchives will act as if it
contains all the Debian files from /home/sangram/cache/apt/archives.
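If you would rather have the share mounted at every boot instead of running the mount command by hand, an /etc/fstab line along these lines should work (the credentials file path here is an assumption; keeping the password in a root-only file avoids exposing it on the command line):

```
# /etc/fstab entry for the Samba share; /etc/samba/cred-sadmin is assumed to
# hold "username=sadmin" and "password=..." lines, readable only by root
//192.168.0.122/SharedAptCacheArchives /mnt/SharedAptCacheArchives cifs credentials=/etc/samba/cred-sadmin,vers=3.0,uid=sadmin,gid=sambashare 0 0
```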

Enable Required Services (troubleshooting):

sudo systemctl enable systemd-networkd-wait-online

sudo systemctl enable networkd-dispatcher.service

sudo systemctl enable systemd-networkd.service

sudo systemctl enable NetworkManager-wait-online.service

systemctl status networkd-dispatcher.service systemd-networkd.service

View Logs of error in mounting at
sudo dmesg | tail -n 20

You can also view mounted filesystems by checking the /proc/mounts file
cat /proc/mounts

Check Folder Permission

cd /home/sangram/cache/apt/archives/
ls *.deb -ld .

To test our settings we will use smbclient as follows.

List all shared folders on the IP address (machine):
smbclient -L //192.168.0.122 -U sadmin

You can check whether our setup worked by using smbclient.
After login, run the "ls" command to verify the files are there.

smbclient //192.168.0.122/SharedAptCacheArchives -U sadmin
smbclient //192.168.0.122/SharedAptCacheArchives -U sangram


2024/10/18

Ubuntu : How to use an ftp package repository

In sources.list almost all package repositories are https or
http, but we can also use an ftp package repository.

Let's first edit our sources.list:

sudo nano /etc/apt/sources.list

add following to it
deb ftp://ftp.ubuntu.com/ubuntu/ focal main universe

save & exit.

But by default in Ubuntu, repositories need to be https or http;

to use ftp, run the following command:

echo 'Dir::Bin::Methods::ftp "ftp";' | sudo tee -a /etc/apt/apt.conf.d/99local-ftp

Now run

sudo apt-get update

Now it will use the ftp URL as well.

2024/10/16

MATLAB - Adding missing launcher in UBUNTU

Add Missing Launcher for MatLab in Ubuntu

Launch Terminal

1. Download an icon:

sudo wget http://upload.wikimedia.org/wikipedia/commons/2/21/Matlab_Logo.png -O /usr/share/icons/matlab.png

2. Create the launcher file:
sudo touch /usr/share/applications/matlab.desktop

3. Edit the launcher file:

sudo nano /usr/share/applications/matlab.desktop

4. Add the following into the launcher file:

#!/usr/bin/env xdg-open
[Desktop Entry]
Type=Application
Icon=/usr/share/icons/matlab.png
Name=MATLAB R2024b
Comment=Start MATLAB - The Language of Technical Computing
Exec=/usr/local/MATLAB/R2024b/bin/matlab -desktop
Categories=Development;

Save & launch MATLAB from the icon.

Let's write a simple script to display a triangle.

Create the following script in MATLAB & run it.

clc
clear all
close all
x=[0,1,2]
y=[0,1,0]
plot(x,y,'Color','blue','LineWidth',1);
area(x,y,'FaceColor','green');


Now save it & run. If you get a low-level graphics error, then

cd /usr/local/MATLAB/R2024b/bin

now launch matlab as

./matlab -softwareopengl

Now try to run the script. If this time the script renders the
triangle, then

in the MATLAB terminal run

opengl('save','software')

This will save the OpenGL preference in MATLAB.

Exit & launch MATLAB from the launcher instead of the command line &
recheck that it renders the triangle as before.

2024/05/20

Running a JavaScript function from an Angular component

 


We can run JavaScript from Angular code on a certain event.

For that, first create a JavaScript file inside the assets folder.

Mine is located at src/assets/js/selectPickerTrigger.js

Inside this js file add following code

function reRenderSelect(){
alert("I M IN")
}

Go to the angular.json file, search for the scripts section, & add the path to your js file there:

"scripts": [
"src/assets/js/selectPickerTrigger.js"
]

Now go to your name.component.ts file

Add the following to the list of import statements:

import { Component, OnInit, ViewChild, ElementRef } from '@angular/core';
declare function reRenderSelect(): any;

Now you can call our reRenderSelect() function from anywhere in component.ts
code as

onCountrySelected(value: string) {
reRenderSelect()
}


Here I am calling the reRenderSelect() function from an event handler for the select
change event; you can call it from anywhere in name.component.ts.

2024/05/17

Node.js: Getting latitude & longitude based on an address

Recently I got into a situation where I had tables of country, state, and
city with

proper foreign keys, referenced in other data, but I needed latitude &
longitude for each city. I did not want to create a new table, so I altered the City
table.

My altered City table schema in MySQL looks like below:

CREATE TABLE `City` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`countryId` int unsigned NOT NULL,
`stateId` int unsigned NOT NULL,
`createdAt` datetime NOT NULL,
`updatedAt` datetime NOT NULL,
`deletedAt` datetime DEFAULT NULL,
`latitude` float DEFAULT NULL,
`longitude` float DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `countryId` (`countryId`),
KEY `stateId` (`stateId`),
CONSTRAINT `City_ibfk_5` FOREIGN KEY (`countryId`) REFERENCES `Country` (`id`),
CONSTRAINT `City_ibfk_6` FOREIGN KEY (`stateId`) REFERENCES `State` (`id`)
) ENGINE=InnoDB
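For reference, the ALTER statement that adds the two new columns to the existing table would look like this (a sketch; the exact statement I ran is not shown above):

```sql
ALTER TABLE `City`
  ADD COLUMN `latitude` float DEFAULT NULL,
  ADD COLUMN `longitude` float DEFAULT NULL;
```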


Now I was left with finding the latitude & longitude for each city. Some data is
freely available online, but it would need to be inserted into a new table and the
latitude & longitude then extracted into my table.

I came across an API provided by geocode.maps.co that lets you get latitude &
longitude from an address. I registered for an API key; its free plan lets us make
100,000 API calls per month, with a limit of 1 request per second.

Below is the Node.js code to accomplish this.

What I am doing in the code:

1) Getting the cities which do not yet have latitude & longitude in the City
table, along with their country & state names.
2) Making an API call to get latitude & longitude based on the city name, state
name & country name.
3) Updating the City record with its latitude & longitude.

NPM dependency used in the code:
npm i mysql2

Node.js Code:

const mysql = require('mysql2/promise');
const apiKey = '{Your API Key}';

// Function to create a delay
function delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}

async function getCityWithoutLatLong() {
let connection;
try {
connection = await getConnection();
// template literal so the query can span lines; skip cities already geocoded
const query = `SELECT country.id AS countryId, country.name AS countryName,
                state.id AS stateId, state.name AS stateName,
                city.id AS cityId, city.name AS cityName FROM City city
                INNER JOIN Country country ON city.countryId = country.id
                INNER JOIN State state ON city.stateId = state.id
                WHERE city.countryId = 101
                AND city.latitude IS NULL;`;
const [results] = await connection.query(query);

// Process each city sequentially with a delay
for (const element of results) {
const address = `${element.countryName},${element.stateName},${element.cityName}`;
const apiEndpoint = `https://geocode.maps.co/search?q=${address}&api_key=${apiKey}`;
console.log(`Calling API with cityId=${element.cityId}`);
await callAPI(element.cityId, apiEndpoint);
await delay(1000); // Wait for 1 second before the next API call
}
} catch (err) {
console.error('Error in getCityWithoutLatLong:', err);
} finally {
if (connection && connection.end) await connection.end();
console.log("Exited Process in function getCityWithoutLatLong");
process.exit();
}
}

// Function to call an API and process the response
async function callAPI(cityId, apiEndpoint) {
try {
const requestOptions = {
method: "GET",
redirect: "follow"
};

const response = await fetch(apiEndpoint, requestOptions);

if (!response.ok) {
throw new Error(`HTTP error! Status: ${response.status}`);
}

const data = await response.json();

// Guard against an empty result set before indexing into it
if (!Array.isArray(data) || data.length === 0) {
console.log(`No geocoding result for cityId=${cityId}`);
return;
}

const lat = data[0].lat;
const long = data[0].lon;

console.log(`lat=${lat}, long=${long}`);

const updateResult = await updateCityWithLatLong(cityId, lat, long);
console.log("City updated:", updateResult);
} catch (error) {
console.error('Fetch error:', error);
}
}

// Function to update the city with latitude and longitude in the database
async function updateCityWithLatLong(cityId, lat, long) {
let connection;
try {
connection = await getConnection();
const query = `UPDATE City SET latitude = ?, longitude = ? WHERE id = ?`;
const [results] = await connection.query(query, [lat, long, cityId]);
return results;
} catch (err) {
console.error('Database query error:', err);
} finally {
if (connection && connection.end) await connection.end();
}
}

// Function to get a connection to the database
async function getConnection() {
const connection = await mysql.createConnection({
host: 'localhost',
user: 'root',
password: 'sangram#81',
database: 'MatrimonyDb'
});
return connection;
}

getCityWithoutLatLong();


In my code I hardcoded the id of the country India, i.e. 101. Modify the code as
per your table structure, and replace the API key placeholder in the code above.
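One more caveat: the address is interpolated into the URL as-is, so a city or state name containing spaces would produce a malformed query string. A small sketch of a safer URL builder (buildGeocodeUrl is an illustrative helper, not part of the code above):

```javascript
// Build the geocode endpoint with the address percent-encoded, so spaces and
// commas in place names cannot break the query string.
function buildGeocodeUrl(countryName, stateName, cityName, apiKey) {
  const address = encodeURIComponent(`${countryName},${stateName},${cityName}`);
  return `https://geocode.maps.co/search?q=${address}&api_key=${apiKey}`;
}

console.log(buildGeocodeUrl("India", "Maharashtra", "Navi Mumbai", "KEY"));
// → https://geocode.maps.co/search?q=India%2CMaharashtra%2CNavi%20Mumbai&api_key=KEY
```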

2024/05/04

Dynamoose & Express.js :CRUD Operations

Today we explore Dynamoose, an ODM for DynamoDB. I am using local DynamoDB
instead of a cloud instance.

As usual, I am creating my base Express template using the generator.

express --view=ejs dynamoose-express

Now I will create a demo.js in routes & mount it in app.js at mount point '/demo'.

Content of demo.js

var express = require("express");
var router = express.Router();
var User = require("../models/userModel");

/* Create a user. */
router.post("/createUser", async function (req, res, next) {
try {
// Use the model
const user = new User({
name: req.body.name,
email: req.body.email,
age: req.body.age,
});

// Save the user to DynamoDB
let savedUser = await user.save({ overwrite: false });
savedUser.createdAt = new Date(savedUser.createdAt)

res.json({
success: true,
message: "User created Successully",
data: savedUser,
});
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

//get record by email
router.get("/getByEmail/:email", async function (req, res, next) {
try {
let user = await User.query("email").eq(req.params.email).exec();
res.json({ success: true, message: "User found", data: user });
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

//get record by user id
router.get("/getByUserId/:userId", async function (req, res, next) {
try {
let user = await User.query("userId").eq(req.params.userId).exec();
res.json({ success: true, message: "User found", data: user });
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

//get all records
router.get("/getAll", async function (req, res, next) {
try {
const users = await User.scan().exec();
res.json({ success: true, message: "User found", data: users });
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

//greater than
router.get("/youngerThan/:age", async function (req, res, next) {
try {
const users = await User.scan()
.filter("age")
.gt(parseInt(req.params.age))
.exec();
res.json({ success: true, message: "Users found Successfully", data: users });
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

//delete
router.delete("/:userId", async (req, res, next) => {
try {
let myUser = await User.query("userId").eq(req.params.userId).exec();
if (myUser.length) {
await myUser[0].delete(); // Delete the user with the specified userId
res.json({ success: true, message: "User deleted successfully" });
} else {
res.json({ success: false, message: "User not found" });
}
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

//update
router.put("/:userId", async (req, res, next) => {
try {
let myUser = new User({
name: req.body.name,
email: req.body.email,
age: req.body.age,
userId: req.params.userId,
updatedAt: Date.now(),
});

myUser = await myUser.save({ overwrite: true });

myUser.updatedAt = new Date(myUser.updatedAt)
myUser.createdAt = new Date(myUser.createdAt)

res.json({
success: true,
message: "User saved successfully",
data: myUser,
});
} catch (exp) {
res.json({ success: false, message: exp.message });
}
});

module.exports = router;

Changes required in app.js

var demoRouter = require('./routes/demo');
&
app.use('/demo', demoRouter);

Now let's install the required npm packages.

Please check the packages to install from the package.json below:

{
"name": "dynamoose-express",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "node ./bin/www"
},
"dependencies": {
"cookie-parser": "~1.4.4",
"debug": "~2.6.9",
"dotenv": "^16.4.5",
"dynamoose": "^4.0.1",
"ejs": "~2.6.1",
"express": "^4.19.2",
"http-errors": "~1.6.3",
"morgan": "~1.9.1",
"uuid": "^9.0.1"
}
}

Now add connection.js at root

const dynamoose = require('dynamoose');

dynamoose.aws.ddb.local("http://localhost:8000")
module.exports = dynamoose


Create a models folder and add userModel.js into it:

const dynamoose = require("../connection");
const { v4: uuidv4 } = require('uuid');
const UserSchema = new dynamoose.Schema(
{
userId: {
type: String,
hashKey: true,
// use a function so a fresh uuid is generated per item,
// not once at schema definition time
default: () => uuidv4()
},
email: {
type: String,
index: {
name: "EmailIndex",
global: true,
rangeKey: "userId",
},
},
name: {
type: String,
},
age: Number,
createdAt: {
type: Date,
default: Date.now,
},
updatedAt: {
type: Date,
default: null,
},
},
{
throughput: "ON_DEMAND", // or { read: 5, write: 5 }
}
);

const User = dynamoose.model("User", UserSchema);
module.exports = User;
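A note on the userId default: a plain value like uuidv4() is evaluated once when the schema is defined and then reused, while a function is re-evaluated for every item, which is what you want for per-record UUIDs. The difference in miniature (a counter stands in for uuidv4):

```javascript
// A counter standing in for uuidv4(), to show when defaults are evaluated.
let counter = 0;
const nextId = () => ++counter;

// Like `default: nextId()`: evaluated once, then reused for every item.
const sharedDefault = nextId();
const ids1 = [sharedDefault, sharedDefault];

// Like `default: () => nextId()`: the function runs once per item.
const perItemDefault = () => nextId();
const ids2 = [perItemDefault(), perItemDefault()];

console.log(ids1); // → [ 1, 1 ]
console.log(ids2); // → [ 2, 3 ]
```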

My .env file is as below

AWS_ACCESS_KEY_ID = "fakeMyKeyId"
AWS_SECRET_ACCESS_KEY = "fakeSecretAccessKey"
AWS_REGION = "fakeRegion"

You can run npm install meanwhile if the npm packages are not yet installed.

Now, to test the endpoints created, run npm start.

Endpoints:

For creation of User:

curl --location 'http://localhost:3000/demo/createUser' \
--header 'Content-Type: application/json' \
--data-raw '{
"name":"Vijay Desai",
"email":"Vijay@gmail.com",
"age":24
}'

Output:
{
"success": true,
"message": "User created Successully",
"data": {
"name": "Vijay Desai",
"email": "Vijay@gmail.com",
"age": 24,
"userId": "fda3f7e2-c835-45ff-853c-b93f8b26cb93",
"createdAt": "2024-05-04T13:28:45.554Z"
}
}

For updating an existing user:

curl --location --request PUT 'http://localhost:3000/demo/2' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "Sangram Desai",
"email": "sangram@gmail.com",
"age": 43
}'

Output:
{
"success": true,
"message": "User saved successfully",
"data": {
"name": "Sangram Desai",
"email": "sangram@gmail.com",
"age": 43,
"userId": "2",
"updatedAt": "2024-05-04T13:31:30.480Z",
"createdAt": "2024-05-04T13:31:30.483Z"
}
}

For viewing the list of all users:
curl --location 'http://localhost:3000/demo/getAll'

For getting user by its userId:

curl --location 'http://localhost:3000/demo/getByUserId/fda3f7e2-c835-45ff-853c-b93f8b26cb93'

There are some other endpoints that you can explore.

The complete code of this project is available at
https://github.com/gitsangramdesai/dynamoose-express.

In DynamoDB there is no built-in functionality similar to the auto-increment id
in MySQL; people usually use UUIDs for the primary key. A primary key in DynamoDB
is made up of a partition key & a sort key. Records with the same partition key
will be saved in one partition & sorted by the sort key (range key). You can also
define a primary key without a sort key (range key), as done above.
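As a sketch of what the composite-key variant could look like in a Dynamoose schema definition (the field names orderId and createdAt are illustrative, not from the project above):

```javascript
// Illustrative shape of schema fields with both a partition key and a sort
// key. In Dynamoose, hashKey marks the partition key and rangeKey the sort key.
const compositeKeyFields = {
  orderId: { type: String, hashKey: true },   // partition key
  createdAt: { type: Date, rangeKey: true },  // sort key (range key)
};

// Items with the same orderId share a partition, ordered by createdAt.
console.log(Object.keys(compositeKeyFields)); // → [ 'orderId', 'createdAt' ]
```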

AWS DynamoDB: How to install locally on Ubuntu?

It is possible to install DynamoDB locally; let's explore how.

First download the DynamoDB Local archive:

wget https://d1ni2b6xgvw0s0.cloudfront.net/v2.x/dynamodb_local_latest.tar.gz

Now extract the archive.

Rename the extracted folder to dynamodb.

Move it to the location where you want to install the binary:

mv dynamodb /usr/share/

Now, on the command line, go to the folder to which dynamodb was moved,
in this case /usr/share/dynamodb.

cd /usr/share/dynamodb

Test if it runs from the terminal by running this:

sudo java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

Now, each time you want to use the server you would have to run this command.
To keep it running without starting it manually, we need a service.

To create a service

sudo nano /etc/systemd/system/dynamodb.service

add the following to it:

[Unit]
Description=DynamoDB Service
[Service]
User=root
WorkingDirectory=/usr/share/dynamodb
ExecStart=/usr/share/dynamodb/dynamodb.sh
SuccessExitStatus=143
TimeoutStopSec=10
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

save & exit


Now we will create the shell script file referred to in the service above.

cd /usr/share/dynamodb
nano dynamodb.sh

add

#!/bin/sh
sudo java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb


save and exit.

Make the shell script executable:

chmod u+x dynamodb.sh

Now let the system know that we have created a new service:

sudo systemctl daemon-reload
sudo systemctl enable dynamodb
sudo systemctl start dynamodb
sudo systemctl status dynamodb


The output of the last command in my case is as below:

● dynamodb.service - Dynamo DB Local Service
Loaded: loaded (/etc/systemd/system/dynamodb.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2024-05-04 11:13:52 IST; 11min ago
Main PID: 33499 (dynamodb.sh)
Tasks: 41 (limit: 18708)
Memory: 150.8M
CPU: 4.333s
CGroup: /system.slice/dynamodb.service
├─33499 /bin/sh /usr/share/dynamodb/dynamodb.sh
├─33500 sudo java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
└─33501 java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

May 04 11:13:52 sangram-Inspiron-14-5430 sudo[33500]: root : PWD=/usr/share/dynamodb ; USER=root ; COMMAND=/usr/bin/java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
May 04 11:13:52 sangram-Inspiron-14-5430 sudo[33500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: Initializing DynamoDB Local with the following configuration:
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: Port: 8000
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: InMemory: false
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: Version: 2.4.0
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: DbPath: null
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: SharedDb: true
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: shouldDelayTransientStatuses: false
May 04 11:13:53 sangram-Inspiron-14-5430 dynamodb.sh[33501]: CorsParams: null


Now we need to install awscli on Ubuntu as follows:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
-o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install


Now run aws configure
and give the responses below when prompted.

AWS Access Key ID [****************yId"]: fakeMyKeyId
AWS Secret Access Key [****************Key"]: fakeSecretAccessKey
Default region name ["fakeRegion"]: fakeRegion
Default output format [None]:


The responses need to be given exactly as above; DynamoDB Local does not
validate credentials, so these fake values work fine.


Now you can check the list of tables in DynamoDB by running the following command:

aws dynamodb list-tables --endpoint-url http://localhost:8000

You may also like to check how to do CRUD operations in DynamoDB using Express:
https://msdotnetbuddy.blogspot.com/2023/05/working-with-dynamo-db.html.

References:
https://medium.com/aws-lambda-serverless-developer-guide-with-hands/amazon-dynamodb-primary-key-partition-key-and-sort-key-how-to-choose-right-key-for-dynamodb-ea5673cb87c0

2024/04/30

Sequelize-Postgres-Express:Upload file to database

Today we will explore how to upload a file (an image) into a Postgres database
using Sequelize.

Let's first create an Express application using the express generator.

express --view=ejs express-postgres-upload-file

Here is my package.json, which you can check to install the required npm
packages:

{
"name": "express-mysql-upload-file",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "node ./bin/www"
},
"dependencies": {
"cookie-parser": "~1.4.4",
"debug": "~2.6.9",
"dotenv": "^16.4.5",
"ejs": "~2.6.1",
"express": "~4.16.1",
"http-errors": "~1.6.3",
"morgan": "~1.9.1",
"multer": "^1.4.5-lts.1",
"pg": "^8.11.5",
"sequelize": "^6.37.3"
}
}

Here we are installing the multer, dotenv, sequelize & pg packages.

Run npm i.

Create a .env file in the root folder with the content:

PG_USER=sangram
PG_PASSWORD="sangram#81"
PG_PORT=5432
PG_DATABASE=playground
PG_SERVER=localhost


Now add upload.js in the root folder with the following content:

var multer = require("multer");
var storage = multer.diskStorage({
destination: (req, file, cb) => {
cb(null, "./public/uploads/profile_pic/");
},
filename: function (req, file, cb) {
var fileparts = file.originalname.split(".");
var ext = fileparts[fileparts.length - 1];
cb(null, file.fieldname + "-" + Date.now() + "." + ext);
},
});

var upload = multer({ storage: storage });

module.exports = upload;

Create a models folder in the root location & add image.js to it:

module.exports = function (sequelize, DataTypes) {
const Image = sequelize.define('image', {
imageId: {
type: DataTypes.INTEGER,
autoIncrement: true,
primaryKey: true
},
mimeType: {
type: DataTypes.STRING,
},
fileName: {
type: DataTypes.STRING,
field: 'name'
},
data: {
type: DataTypes.BLOB("long"),
}
}, {
freezeTableName: true
});

return Image;
}

Now create index.js inside the models folder with the following content:

let { sequelize, Sequelize } = require("../connection.js");

let db = {};
// pass the connection instance first, then the Sequelize class (which
// exposes the DataTypes used in the model definition)
db.Images = require("./image.js")(sequelize, Sequelize);

db.Sequelize = Sequelize;
db.sequelize = sequelize;
module.exports = db;
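One file the post does not show is connection.js (required above as ../connection.js). A minimal sketch of what it could look like, assuming the .env values shown earlier and the { sequelize, Sequelize } exports that index.js expects:

```javascript
// connection.js (sketch): build the Sequelize instance from the .env values
// and export both the instance and the Sequelize class.
require("dotenv").config();
const { Sequelize } = require("sequelize");

const sequelize = new Sequelize(
  process.env.PG_DATABASE,
  process.env.PG_USER,
  process.env.PG_PASSWORD,
  {
    host: process.env.PG_SERVER,
    port: process.env.PG_PORT,
    dialect: "postgres",
  }
);

module.exports = { sequelize, Sequelize };
```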

Now add an uploads folder inside the public folder, & inside the uploads folder
add a profile_pic folder.

Now create demo.js inside the routes folder with the following content:

var express = require("express");
var router = express.Router();
var db = require("../models");
var upload = require("../upload"); // the upload.js file in the root folder
var fs = require("fs");
var path = require("path");

router.post("/", upload.single("image"), async (req, res, next) => {
try {
let imageCreated = await db.Images.create({
mimeType: req.file.mimetype,
fileName: req.file.filename,
data: fs.readFileSync(
path.join("./public/uploads/profile_pic/" + req.file.filename)
),
});
res.json({
success: true,
message: "File Uploaded to Mysql Successfully",
data: imageCreated,
});
} catch (exp) {
res.json({ success: false, message: exp.message.toString() });
}
});


router.get("/:fileId", async (req, res) => {
try {
var imageFound = await db.Images.findOne({
where: { imageId: req.params.fileId },
});
var buffer = imageFound.data;
var mimeType = imageFound.mimeType;

res.contentType(mimeType);
res.send(buffer);
} catch (exp) {
res.json({ success: false, message: exp.message.toString() });
}
});

module.exports = router;

Inside app.js add

var demoRouter = require('./routes/demo');

and

app.use('/demo', demoRouter);

in suitable locations.

Now we are ready to run our application. Usually testing can be done using
Postman.

For uploading an image:

curl --location 'http://localhost:3000/demo' \
--form 'image=@"/home/sangram/Pictures/Photo.jpg"'

Output:
{
"success": true,
"message": "File Uploaded to Mysql Successfully",
"data": {
"imageId": 1,
"mimeType": "image/jpeg",
"fileName": "image-1714485923005.jpg",
"data":{contain binary data},
"updatedAt": "2024-04-30T13:07:35.512Z",
"createdAt": "2024-04-30T13:07:35.512Z"
}
}

Please note the imageId, which we are going to use in the next API call.

curl --location 'http://localhost:3000/demo/1'

This will output image uploaded in previous step.
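Besides streaming the bytes with res.send, another common way to hand an image to a browser is a base64 data URI, which can go straight into an img src attribute. A small sketch (toDataUri is an illustrative helper; buffer stands in for imageFound.data):

```javascript
// Convert a stored BLOB (a Node Buffer) plus its MIME type into a data URI.
function toDataUri(mimeType, buffer) {
  return `data:${mimeType};base64,${buffer.toString("base64")}`;
}

// First three bytes of a JPEG file, standing in for real image data.
const fakeImage = Buffer.from([0xff, 0xd8, 0xff]);
console.log(toDataUri("image/jpeg", fakeImage)); // → data:image/jpeg;base64,/9j/
```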

The complete code of this application can be found at

https://github.com/gitsangramdesai/express-sequelize-pg-upload-file