How to easily encrypt/decrypt a file in Linux with gpg

June 29th, 2022

No matter what you’re doing on your computer, you need to do it with an eye to security — that means using strong passwords, storing files in safe locations, and in some cases encrypting files. Fortunately, for nearly every use case there are tools that let you encrypt your data, from transferring data online to storing data on locally attached storage, and even encrypting your entire drive.

Since gpg is built into almost every Linux system, you won’t have to install anything to get this working from the command line. I’ll also show how to gain this functionality within the Nautilus (GNOME Files) file manager tool.

From the command line

Let’s say you have a file, /home/user/test.txt, that you want to password protect. Using gpg, you would do the following.

  1. Open a terminal window.
  2. Change to the /home/user/ directory with the command cd /home/user/
  3. Encrypt the file with the command gpg -c test.txt.
  4. Enter a unique password for the file and hit Enter.
  5. Verify the newly typed password by typing it again and hitting Enter.

You should now see the file test.txt.gpg in the /home/user folder. To decrypt that file, do the following.

  1. Open a terminal window.
  2. Change to the /home/user directory with the command cd /home/user.
  3. Decrypt the file with the command gpg test.txt.gpg.
  4. When prompted, enter the decryption password you created when encrypting the file.

You could send that file to a recipient and, as long as they have gpg installed, they can decrypt the file with the password you used for encryption. If they are a Windows user, they can always install Gpg4win.
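As a sketch, the whole round trip can also be scripted non-interactively with standard GnuPG options (on GnuPG 2.1+, --pinentry-mode loopback is required when supplying the passphrase on the command line; the filename and passphrase here are placeholders):

```shell
# Symmetric encrypt/decrypt round trip (sketch; uses a throwaway directory).
cd "$(mktemp -d)"
echo "secret data" > test.txt

# Encrypt: produces test.txt.gpg next to the original.
gpg --batch --yes --pinentry-mode loopback --passphrase 'S3cret!' -c test.txt

# Decrypt back into a fresh copy and display it.
rm test.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'S3cret!' \
    -o test.txt -d test.txt.gpg
cat test.txt
```

The interactive steps above do the same thing; the flags just suppress the passphrase prompts, which is handy in scripts.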

The GUI way

If you happen to be a GNOME 3 user (or any Linux desktop that makes use of either Nautilus or GNOME Files), you can add a contextual menu entry to the file manager for encryption. Here’s how (I’ll demonstrate it on Ubuntu GNOME 16.04).

  1. Open a terminal window.
  2. Issue the command sudo apt-get install seahorse-nautilus.
  3. Type your sudo password and hit Enter.
  4. If prompted, type y and hit Enter.
  5. Allow the installation to complete.

Open the file manager and navigate to the /home/user directory. Right-click the test.txt file and then click the Encrypt… entry. You will be prompted to enter and then verify an encryption password. Once you’ve verified the password, the test.txt.gpg file will appear in the same directory.

The decryption process is the same.

  1. Open the file manager.
  2. Navigate to the encrypted file.
  3. Right-click the encrypted file.
  4. Click Open with Decrypt File.
  5. When prompted, give the new file a name and hit Enter.
  6. When prompted, enter the decryption password and hit Enter.

The encrypted file will now be decrypted and ready to use.

How to fix 502 Bad Gateway | Cloudflare and Nginx [Engintron]

May 13th, 2022

When you point Cloudflare’s nameservers at a website served through Engintron, you can hit a 502 Bad Gateway error caused by the way Cloudflare and Engintron are linked.

Our technical support / server administrators at subwayhost worked on this issue after many clients asked us to add Cloudflare to our services.

We fixed it, and decided to write this tutorial so that other people and companies can see how to fix it too.


  1. Log in to WHM.
  2. Select Engintron for cPanel/WHM.
  3. Select Edit your custom rules.
  4. Uncomment set $PROXY_DOMAIN_OR_IP.
  5. Add your EXTERNAL IP address (or your INTERNAL IP address if you are behind a firewall and only use the server on an internal network).
  6. The line will look like this:
 set $PROXY_DOMAIN_OR_IP "X.X.X.X"; # Use your cPanel's shared IP address here

Replace X.X.X.X with your server’s IP address.
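The uncomment-and-set steps can also be done with a one-line sed. This is only a sketch: the rules-file path and the IP below are placeholders you must adjust for your own server.

```shell
RULES="/etc/nginx/custom_rules"   # assumed location of Engintron's custom rules
IP="203.0.113.10"                 # placeholder; use your cPanel shared IP

# Uncomment and set the $PROXY_DOMAIN_OR_IP line in one go.
sed -i "s|^#*[[:space:]]*set \$PROXY_DOMAIN_OR_IP.*|set \$PROXY_DOMAIN_OR_IP \"$IP\"; # Use your cPanel's shared IP address here|" "$RULES"
```

After saving, test and reload Nginx (for example with nginx -t && service nginx reload) so the change takes effect.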

Some useful commands for Account migrations in Cpanel server

April 26th, 2022

Pre migration steps (DNS)

rsync -avHl /var/named/ /home/named.backup/
sed -i -e "s/14400/600/" /var/named/*.db
newserial=$(date +%Y%m%d%H)
sed -i -e "s/[0-9]\{10\}/$newserial/" /var/named/*.db
rndc reload

For customers with a large number of domains you can use the find command.

cd /var/named
find . -name "*.db" -exec sed -i -e "s/TTL\ 14400/TTL\ 600/" {} \;
newserial=$(date +%Y%m%d%H)
find . -name "*.db" -exec sed -i -e "s/[0-9]\{10\}/$newserial/" {} \;
rndc reload

Shared accounts

On our name servers it is best to create a text file with the list of domains.

for domain in `cat domains.txt `; do sed -i -e "s/TTL\ 20h/TTL\ 600/" /var/named/$domain.db; done
newserial=$(date +%Y%m%d%H)
for domain in `cat domains.txt `; do sed -i -e "s/[0-9]\{10\}/$newserial/" /var/named/$domain.db; done
for domain in `cat domains.txt `; do sudo /usr/sbin/rndc reload $domain; done

Set up ssh key

ssh-keygen -t rsa
cat /root/.ssh/id_rsa.pub | ssh root@<newhost> 'read key ; mkdir -p ~/.ssh ; echo "$key" >> ~/.ssh/authorized_keys'

Package Accounts

for i in $(/bin/ls -A /var/cpanel/users/);do /scripts/pkgacct $i /home/temp; done

To skip home dirs:

for i in $(/bin/ls -A /var/cpanel/users/);do /scripts/pkgacct --skiphomedir $i; done

Add "--skipacctdb" to skip databases.

To split the packaging process run this:

for i in $(/bin/ls -A /var/cpanel/users/[a-j]*| cut -d "/" -f 5);do /scripts/pkgacct $i; done
for i in $(/bin/ls -A /var/cpanel/users/[k-z]*| cut -d "/" -f 5);do /scripts/pkgacct $i; done

Migrate only certain accounts:

while read domain; do ACCT=$(grep -l DNS=$domain /var/cpanel/users/*); /scripts/pkgacct `basename $ACCT`; done < domains_to_move.txt
while read domain; do ACCT=$(grep -l DNS=$domain /var/cpanel/users/*); echo $domain `basename $ACCT`; done < domains_to_move.txt

FTP files:

ncftpget -R -u user -p pass host_name . public_html/
wget -c -r -nH
lftp: set ftp:ssl-allow no
mirror . .

Restore Accounts

Note: It is generally advised to run easyapache before restoring the accounts.

cd /home
for x in $(/bin/ls -A *.tar.gz | cut -d "-" -f 2 | cut -d "." -f 1); do /scripts/restorepkg $x; done

Prep for final rsync

for service in crond atd exim httpd cpanel courier-imap courier-authlib dovecot named pure-ftpd proftpd; do /etc/init.d/$service stop; done

Put up maintenance page:

cd /usr/local/apache/htdocs/

index.html contents:

cat << EOF > index.html
<body style="margin:50px 0px; padding:0px; text-align:center; background: LightGray;">
<div id="content" style="border: 1px solid; width: 500px; margin:0px auto; padding:15px; background: Pink;">
<P class='quote'>This site is currently under maintenance.  Please try again later.</P>
</div>
EOF

Start up new http server:

python -m SimpleHTTPServer 80

Rsync Account Data

echo "x.x.x.x    oldserver" >> /etc/hosts  # append with >>; a single > would wipe the file
for acct in $(/bin/ls -A /var/cpanel/users); do rsync -avzHPpl -e "ssh -c arcfour" --delete root@oldserver:/home/$acct/ /home/$acct/; done

ssh oldserver "mysql -Bse 'show databases'" | egrep -v "information_schema|cphulkd|eximstats|leechprotect|tmp|logaholic|modsec|mysql" > dbs.txt
for db in `cat dbs.txt `; do mysql -e "create database $db" 2>/dev/null; done
for db in `cat dbs.txt `; do echo $db && ssh oldserver "mysqldump --opt --skip-lock-tables $db" | mysql $db; done

rsync from a plesk server:
mypass=`ssh oldserver cat /etc/psa/.psa.shadow`
ssh oldserver "mysql -u admin -p'$mypass' -Bse 'show databases'" | egrep -v "information_schema|cphulkd|eximstats|leechprotect|tmp|logaholic|modsec|mysql" > dbs.txt
for db in `cat dbs.txt `; do mysql -e "create database $db" 2>/dev/null; done
for db in `cat dbs.txt `; do echo $db && ssh oldserver "mysqldump --opt --skip-lock-tables -u admin -p'$mypass' $db" | mysql $db; done 

push method:
for acct in $(/bin/ls -A /var/cpanel/users); do rsync -avzHl -e ssh /home/$acct/ root@$newserver:/home/$acct/; done
for db in $(mysql -Bse 'show databases' | egrep -v "information_schema|cphulkd|eximstats|leechprotect|tmp|logaholic|modsec|mysql"); do mysqldump --add-drop-database --databases $db | ssh $newserver "mysql";  done

Update Zone Files

Copy the zone files from the new server to the old server.

cd /var/named
scp *.db oldserver:/var/named/
ssh oldserver
cd /var/named
newserial=$(date +%Y%m%d%H)
sed -i -e "s/[0-9]\{10\}/$newserial/" /var/named/*.db
/etc/init.d/named restart

Creating cgroups in RHEL/CentOS 7

April 21st, 2022

cgroups allow system resources to be limited for certain users’ processes, as defined in configuration files. This is useful, e.g., if you wish to cap a compiler’s maximum memory usage and keep it from grinding the system to a halt.


The libcgroup-tools package needs to be installed. Check whether it is already present:

sudo yum list installed libcgroup-tools

If not installed, run

sudo yum install libcgroup-tools -y

Creating cgroups

In RHEL 7, you can list the resource controllers which are mounted by default using

lssubsys -am

If you are configuring any of the listed controllers, you do not need to mount them in your configuration file.

The default syntax of /etc/cgconfig.conf, the default control group configuration file, is:


group <groupname> {
        [permissions] #optional
        <controller> {
                <param name> = <param value>;
        }
}
If this is your first time configuring control groups on a system, configure the service to start at boot:

sudo systemctl enable cgconfig

Start the service:

sudo systemctl start cgconfig

Once you have created control groups, you need to modify /etc/cgrules.conf, which assigns user processes to control groups:
NB: the <process> parameter is optional

<user>:<process>    <controllers> <controlgroup>

If this is the first time control groups are created on a particular system, configure the service to start at boot:

sudo systemctl enable cgred

Start the service:

sudo systemctl start cgred

Example: Limiting PyCharm memory usage to 40% of total RAM & Swap

Work out 40% of total system memory in kB:

awk '/MemTotal/{printf "%d\n", $2 * 0.4}' < /proc/meminfo

Work out 40% of total system swap in kB:

awk '/SwapTotal/{printf "%d\n", $2 * 0.4}' < /proc/meminfo

Add the two together: 3931969 + 2018507 = 5950476
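The two awk commands and the addition can be combined into a single step (a sketch; the numbers will differ on every machine):

```shell
# 40% of (total RAM + total swap) in kB, straight from /proc/meminfo.
awk '/^(MemTotal|SwapTotal):/ {sum += int($2 * 0.4)} END {print sum}' /proc/meminfo
```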

Create the control group and set the memory limits in /etc/cgconfig.conf:


group pycharm {
        memory {
                memory.limit_in_bytes = 3931969k;
                memory.memsw.limit_in_bytes = 5950476k;
        }
}

Start the service:

sudo systemctl start cgconfig

This will create the pycharm cgroup under /sys/fs/cgroup/memory, owned by root as we did not specify any custom permissions:

ls -l /sys/fs/cgroup/memory | grep pycharm
drwxr-xr-x  2 root root 0 Jun  1 08:27 pycharm

Assign all users’ PyCharm processes to the control group by adding this line to /etc/cgrules.conf:
NB: Find out the process’s real path if the command is an alias or symlink. In this example pycharm → /usr/local/bin/pycharm → /opt/pycharm-community-2017.3.4/bin/

*:pycharm   memory  pycharm

Start the service:

sudo systemctl start cgred

Check that the cgroup is correctly configured, i.e. that launching pycharm populates the files in /sys/fs/cgroup/memory/pycharm:

#Before running pycharm
cat /sys/fs/cgroup/memory/pycharm/memory.usage_in_bytes
#Launch pycharm process
pycharm &
#Check mem usage file is populated
cat /sys/fs/cgroup/memory/pycharm/memory.usage_in_bytes

Example: Stress testing memory in a cgroup

It is a VERY good idea to stress test the defined RAM limits for a new cgroup.

This can be done by adding the stress command to the cgroup:

<user>:stress memory  <groupname>

Confirm that the stress process can reach the cgroup memory limit, e.g. 40% of total RAM:

cat /sys/fs/cgroup/memory/<groupname>/memory.usage_in_bytes
#Start stress process
stress --vm-bytes $(awk '/MemTotal/{printf "%d\n", $2 * (40 / 100);}' < /proc/meminfo)k --vm-keep -m 1
cat /sys/fs/cgroup/memory/pycharm/memory.usage_in_bytes
#Use CTRL+C to stop the stress process

Confirm that the stress process is killed if it tries to use more RAM than permitted e.g. 80% of RAM:

cat /sys/fs/cgroup/memory/<groupname>/memory.usage_in_bytes
#Start stress process and confirm it is killed using sig 9
stress --vm-bytes $(awk '/MemTotal/{printf "%d\n", $2 * (80 / 100);}' < /proc/meminfo)k --vm-keep -m 1
stress: info: [89346] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [89346] (415) <-- worker 89347 got signal 9
stress: WARN: [89346] (417) now reaping child worker processes
stress: FAIL: [89346] (451) failed run completed in 6s

Tuned – Automatic Performance Tuning of CentOS/RHEL Servers

April 21st, 2022

To maximize the end-to-end performance of services, applications and databases on a server, system administrators usually carry out custom performance tuning, using both generic operating system tools and third-party tools. One of the most useful performance tuning tools on CentOS/RHEL/Fedora Linux is Tuned.

Tuned is a powerful daemon for dynamically auto-tuning Linux server performance based on information it gathers from monitoring use of system components, to squeeze maximum performance out of a server.

It does this by tuning system settings dynamically on the fly depending on system activity, using tuning profiles. Tuning profiles include sysctl configs, disk-elevators configs, transparent hugepages, power management options and your custom scripts.

By default tuned will not dynamically adjust system settings, but you can modify how the tuned daemon operates and allow it to dynamically alter settings based on system usage. You can use the tuned-adm command-line tool to manage the daemon once it is running.

On CentOS/RHEL 7 and Fedora, tuned comes pre-installed and activated by default, but on older versions such as CentOS/RHEL 6.x, you need to install it as follows.

# yum install tuned

After the installation, you will find the following important tuned configuration files.

  • /etc/tuned – tuned configuration directory.
  • /etc/tuned/tuned-main.conf – the main tuned configuration file.
  • /usr/lib/tuned/ – stores a sub-directory for all tuning profiles.

Now you can start or manage the tuned service using the following commands.

--------------- On RHEL/CentOS 7 --------------- 
# systemctl start tuned	        
# systemctl enable tuned	
# systemctl status tuned	
# systemctl stop tuned		

--------------- On RHEL/CentOS 6 ---------------
# service tuned start
# chkconfig tuned on
# service tuned status
# service tuned stop

Now you can control tuned using the tuned-adm tool. A number of predefined tuning profiles are already included for common use cases. You can check the currently active profile with the following command.

# tuned-adm active

From the output of the above command, the test system is optimized for running as a virtual guest.

Check Current Tuned Profile

You can get a list of available tuning profiles using following command.

# tuned-adm list
List Available Tuned Profiles

To switch to one of the available profiles, for example throughput-performance (a profile that delivers excellent performance across a variety of common server workloads), run:

# tuned-adm  profile throughput-performance
# tuned-adm active
Switch to Tuning Profile

To see the profile tuned recommends for your system, run the following command (it only prints the profile name; it does not activate it).

# tuned-adm recommend

And you can disable all tuning as shown.

# tuned-adm off

How To Create Custom Tuning Profiles

You can also create new profiles. We will create a new profile called test-performance that uses the settings of the existing latency-performance profile.

Switch into the directory that stores the sub-directories for all tuning profiles and create a new sub-directory called test-performance for your custom profile.

# cd /usr/lib/tuned/
# mkdir test-performance

Then create a tuned.conf configuration file in the directory.

# vim test-performance/tuned.conf

Copy and paste the following configuration in the file.

[main]
summary=Test profile that uses settings from the latency-performance tuning profile
include=latency-performance

Save the file and close it.

If you run the tuned-adm list command again, the new tuning profile should exist in the list of available profiles.

# tuned-adm list
Check New Tuned Profile

To activate the new tuned profile, issue the following command.

# tuned-adm  profile test-performance