HTTP download speed difference in Windows 7 vs Linux

I ran into a strange situation with a Windows PC that was showing limited internet transfer speeds for no apparent reason. Running the same test on a Linux box gave good speed.

 

After some intense debugging I was able to diagnose the root cause. At first it looked like local packet fragmentation in the way Windows assembles HTTP requests, but the real culprit turned out to be a set of global TCP settings on the Windows box that restrict download speed. To allow large files to download at full speed, I modified the settings below.

These were my initial TCP settings:

C:\Windows\system32>netsh interface tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : disabled
Chimney Offload State               : automatic
NetDMA State                        : enabled
Direct Cache Access (DCA)           : disabled
Receive Window Auto-Tuning Level    : disabled
Add-On Congestion Control Provider  : none
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled
** The above autotuninglevel setting is the result of Windows Scaling heuristics
overriding any local/policy configuration on at least one profile.

C:\Windows\system32>netsh interface tcp show heuristics

TCP Window Scaling heuristics Parameters
----------------------------------------------
Window Scaling heuristics           : enabled
Qualifying Destination Threshold    : 3
Profile type unknown                : normal
Profile type public                 : normal
Profile type private                : restricted
Profile type domain                 : normal

 

So I made the following changes:

# disable window scaling heuristics
C:\Windows\system32>netsh interface tcp set heuristics wsh=disabled
Ok.

# enable receive-side scaling
C:\Windows\system32>netsh int tcp set global rss=enabled
Ok.

# manually set the auto-tuning level
C:\Windows\system32>netsh interface tcp set global autotuninglevel=experimental
Ok.

# set the congestion control provider
C:\Windows\system32>netsh interface tcp set global congestionprovider=ctcp
Ok.

C:\Windows\system32>netsh interface tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Chimney Offload State               : automatic
NetDMA State                        : enabled
Direct Cache Access (DCA)           : disabled
Receive Window Auto-Tuning Level    : experimental
Add-On Congestion Control Provider  : ctcp
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled

After changing these settings, downloads are fast again and hit the internet connection's limit.

How to upgrade Ubuntu 16.04 to Ubuntu 18.04?

Check the current Ubuntu version before upgrading:

lsb_release -a

First, run an update:

sudo apt update

Then run the upgrade command:

sudo apt upgrade

After that, run a dist-upgrade:

sudo apt dist-upgrade

Then install the update-manager-core package:

sudo apt install update-manager-core

Then edit the file below:

sudo vim /etc/update-manager/release-upgrades

In that file, make sure the Prompt line is set to:

Prompt=lts

Then save this file
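For reference, the relevant part of /etc/update-manager/release-upgrades should end up looking like this (stock layout, with only the Prompt value changed):

[DEFAULT]
Prompt=lts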

Then do a release upgrade

sudo do-release-upgrade -d

Once done, restart the machine and check the version again.
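To confirm the upgrade completed, check the release string again after the reboot:

lsb_release -a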

 

Resetting email account password from Command line in cPanel

1) Log in to the server as root via SSH.

2) Run the command “openssl” and you will see this:

root@server [~]# openssl
OpenSSL>

3) Now, at the OpenSSL prompt, run the command: passwd -1 "your_new_email_password"

root@server [~]# openssl
OpenSSL> passwd -1 "your_new_email_password"
$1$m4pq941w/j$1KYI5VwHl8C6h9H6ScTFNWy/
OpenSSL> quit

Please note the option in the command: passwd -1 "your_new_email_password". It is not the letter "-l"; it is the digit "-1".
You will get the MD5-crypt hash of your password. Copy it somewhere.

4) Now go into the etc directory for the mail domain inside the cPanel account's home directory:

root@server [~]# cd /home/test/etc/test.com
root@server [/home/test/etc/test.com]#

5) There you will see some files: passwd, passwd,v, quota, quota,v, shadow and shadow,v.
The files we care about here are shadow and shadow,v.

6) If you look at the shadow file, you will see something like this:

root@server [/home/test/etc/test.com]# cat shadow
test:$6$itlQRsdN/bGoiCB/n/$53X3P/wy.lsS6uds4u7vporiAqdKBnfsF8Zx8b6MXs6/oxM0inzns3lsDfHdXNygq3pdPOFR57ryWHk63A7JJr2r61:15673::::::

The second colon-separated field (the long hash string) is the password part. Replace it with the MD5-crypt hash of your new password that you copied from the OpenSSL prompt earlier, so the line looks like this:

test:$1$m4pq941w/j$1KYI5VwHl8C6h9H6ScTFNWy/:16673::::::

Save and close the file.

If a shadow,v file is present, replace the hash there in the same way. If it is not present, just try logging in to webmail with the new password; it should work.
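If you prefer to script the replacement, here is a minimal sketch of the same idea; the user test, the domain test.com and the mailbox name test are just the example values from above, so adjust all three for your account:

# generate the MD5-crypt hash and swap it into the mailbox's shadow entry
NEWHASH=$(openssl passwd -1 'your_new_email_password')
sed -i "s|^test:[^:]*:|test:${NEWHASH}:|" /home/test/etc/test.com/shadow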

cPanel EasyApache 4: Installing Redis and the Redis PHP extension

Installing the Redis daemon:

for CentOS 6/RHEL 6

rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
rpm -ivh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
yum -y install redis --enablerepo=remi --disableplugin=priorities
chkconfig redis on
service redis start

for CentOS 7/RHEL 7

rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
yum -y install redis --enablerepo=remi --disableplugin=priorities
systemctl enable redis
systemctl start redis
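Once the service is running, a quick sanity check confirms the daemon answers; the expected reply is PONG:

redis-cli ping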

Installing the Redis PHP extension for all available versions of PHP.
Copy and paste the entire block into your SSH session; do not run it line by line.

for phpver in $(ls -1 /opt/cpanel/ | grep ea-php | sed 's/ea-php//g') ; do
cd ~
# download and unpack the latest redis extension from PECL
wget -O redis.tgz https://pecl.php.net/get/redis
tar -xvf redis.tgz
cd ~/redis-*/ || exit
# build the extension against this PHP version
/opt/cpanel/ea-php"$phpver"/root/usr/bin/phpize
./configure --with-php-config=/opt/cpanel/ea-php"$phpver"/root/usr/bin/php-config
make clean && make install
echo 'extension=redis.so' > /opt/cpanel/ea-php"$phpver"/root/etc/php.d/redis.ini
cd ~ && rm -rf ~/redis*
done

/scripts/restartsrv_httpd
/scripts/restartsrv_apache_php_fpm

All done! Check to make sure the PHP extension is loaded in each version of PHP:
Copy and paste the entire block into your SSH session; do not run it line by line.

for phpver in $(ls -1 /opt/cpanel/ | grep ea-php | sed 's/ea-php//g') ; do
echo "PHP $phpver" ; /opt/cpanel/ea-php"$phpver"/root/usr/bin/php -i | grep "Redis Support"
done

Output should be:

PHP 55
Redis Support => enabled
PHP 56
Redis Support => enabled
PHP 70
Redis Support => enabled
PHP 71
Redis Support => enabled

Command to activate a VG in LVM?

When you create a volume group, it is activated by default. Sometimes, though, you need to activate it manually to make the kernel aware of the volume group (for example, after attaching disks from another system or booting into a rescue environment).

To activate,

# vgchange -ay my_vg_name

To deactivate,

# vgchange -an my_vg_name

Commands to activate a VG in a cluster:

To activate exclusively on one node,

# vgchange -aey my_vg_name

To deactivate exclusively on one node,

# vgchange -aen my_vg_name

To activate only on the local node,

# vgchange -aly my_vg_name

To deactivate only on the local node,

# vgchange -aln my_vg_name
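To confirm which volume groups and logical volumes are active, you can list them afterwards:

# vgs
# lvscan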

 

 

Optimize MySQL & Apache on cPanel/WHM server

In this optimization process we will go over the core Apache configuration and the modules that ship with Apache core. With the correct Apache and MySQL settings you can get excellent results and a sensible level of resource use without installing third-party proxy or cache modules. So let's start.

 

Apache & PHP

In the first stage, run EasyApache and select the following:

  • Apache Version 2.4+

  • PHP Version 5.6+

  • In step 5 “Exhaustive Options List” select

– Deflate

– Expires

– MPM Worker

After EasyApache finishes, go to WHM » Service Configuration » Apache Configuration » "Global Configuration" and set the values according to the resources available on your server.

Set the directives below according to server memory; the three value columns run from roughly 2GB of RAM or less up to 12GB+:

Apache Directive              ≤2GB     Mid-range    12GB+
----------------------------------------------------------
StartServers                  4        8            16
MinSpareServers               4        8            16
MaxSpareServers               8        16           32
ServerLimit                   128      256          512
MaxRequestWorkers             150      250          500
MaxConnectionsPerChild        1000     2500         5000
Keep-Alive                    On       On           On
Keep-Alive Timeout            1        1            1
Max Keep-Alive Requests       30       30           30
Timeout                       60       60           60

Now go to WHM » Service Configuration » Apache Configuration » Include Editor » "Pre VirtualHost Include" and enable basic caching and data compression, so the server does less work for the same requests, by pasting the block below into the text field.

# Cache Control Settings for one hour cache
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
Header set Cache-Control "max-age=3600, public"
</FilesMatch>

<FilesMatch "\.(xml|txt)$">
Header set Cache-Control "max-age=3600, public, must-revalidate"
</FilesMatch>

<FilesMatch "\.(html|htm)$">
Header set Cache-Control "max-age=3600, must-revalidate"
</FilesMatch>

# Mod Deflate performs data compression
<IfModule mod_deflate.c>
<FilesMatch "\.(js|css|html|php|xml|jpg|png|gif)$">
SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE no-gzip
</FilesMatch>
</IfModule>
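Before relying on the new include, it is worth checking the Apache configuration syntax and then restarting Apache the cPanel way (on EasyApache 4 the binary is normally /usr/sbin/httpd):

/usr/sbin/httpd -t
/scripts/restartsrv_httpd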

Go to WHM » Service Configuration » “PHP Configuration Editor” and set the parameters according to your needs:

– memory_limit

– max_execution_time

– max_input_time

 

MySQL

For MySQL you need to update the configuration file, which is usually /etc/my.cnf.

Suggested configuration for 2 cores & 4GB memory (MySQL 5.6 / MariaDB 10):

[mysqld]
    local-infile = 0
    max_connections = 250
    key_buffer = 64M
    myisam_sort_buffer_size = 64M
    join_buffer_size = 1M
    read_buffer_size = 1M
    sort_buffer_size = 2M
    max_heap_table_size = 16M
    table_open_cache = 5000
    thread_cache_size = 286
    interactive_timeout = 25
    wait_timeout = 7000
    connect_timeout = 15
    max_allowed_packet = 16M
    max_connect_errors = 10
    query_cache_limit = 2M
    query_cache_size = 32M
    query_cache_type = 1
    tmp_table_size = 16M
    open_files_limit=25280

[mysqld_safe]

[mysqldump]
    quick
    max_allowed_packet = 16M
[myisamchk]
    key_buffer = 64M
    sort_buffer = 64M
    read_buffer = 16M
    write_buffer = 16M
[mysqlhotcopy]
    interactive-timeout

Suggested configuration for 8 cores & 16GB+ memory (shared server, MySQL 5.6 / MariaDB 10):

[mysqld]
local-infile=0
max_connections = 600
max_user_connections=1000
key_buffer_size = 512M
myisam_sort_buffer_size = 64M
read_buffer_size = 1M
table_open_cache = 5000
thread_cache_size = 384
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 128M
max_allowed_packet = 64M
net_buffer_length = 16384
max_connect_errors = 10
concurrent_insert = 2
read_rnd_buffer_size = 786432
bulk_insert_buffer_size = 8M
query_cache_limit = 5M
query_cache_size = 128M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65535
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
max_write_lock_count = 8
slow_query_log
log-error
external-locking=FALSE
sort_buffer_size = 1M
join_buffer_size = 1M
thread_stack = 192K
open_files_limit=50000

[mysqld_safe]

[mysqldump]
quick
max_allowed_packet = 16M

[isamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M

[myisamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M
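After editing /etc/my.cnf, restart MySQL and spot-check that the new values are live; max_connections is used here just as an example variable:

/scripts/restartsrv_mysql
mysql -e "SHOW VARIABLES LIKE 'max_connections';"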

How to get mail statistics from your Postfix mail logs

 

pflogsumm is an amazing tool and will give you the following details:

  • Total number of:
    • Messages received, delivered, forwarded, deferred, bounced and rejected
    • Bytes in messages received and delivered
    • Sending and Recipient Hosts/Domains
    • Senders and Recipients
    • Optional SMTPD totals for number of connections, number of hosts/domains connecting, average connect time and total connect time
  • Per-Day Traffic Summary (for multi-day logs)
  • Per-Hour Traffic (daily average for multi-day logs)
  • Optional Per-Hour and Per-Day SMTPD connection summaries
  • Sorted in descending order:
    • Recipient Hosts/Domains by message count, including:
      • Number of messages sent to recipient host/domain
      • Number of bytes in messages
      • Number of defers
      • Average delivery delay
      • Maximum delivery delay
    • Sending Hosts/Domains by message and byte count
    • Optional Hosts/Domains SMTPD connection summary
    • Senders by message count
    • Recipients by message count
    • Senders by message size
    • Recipients by message size

    with an option to limit these reports to the top nn.

  • A Semi-Detailed Summary of:
    • Messages deferred
    • Messages bounced
    • Messages rejected
  • Summaries of warnings, fatal errors, and panics
  • Summary of master daemon messages

Installation:

Installation is very simple; just download and unpack the package.

  • wget http://jimsun.linxnet.com/downloads/pflogsumm-1.1.5.tar.gz
  • tar -zxf pflogsumm-1.1.5.tar.gz
  • chown root:root pflogsumm-1.1.5

 

Generate the statistics:

cd pflogsumm-1.1.5
cat /var/log/maillog | ./pflogsumm.pl

(The above command generates detailed statistics like the following.)

 

Grand Totals
------------
messages

118 received
319 delivered
1 forwarded
6 deferred (1597 deferrals)
18 bounced
20 rejected (5%)
0 reject warnings
0 held
0 discarded (0%)

5452k bytes received
277987k bytes delivered
76 senders
49 sending hosts/domains
128 recipients
37 recipient hosts/domains

Per-Day Traffic Summary
    date          received  delivered  deferred  bounced  rejected
    ---------------------------------------------------------------
    Jan 13 2018         51        251       476       14         9
    Jan 14 2018         17         16       522        2         5
    Jan 15 2018         43         45       527        2         6
    Jan 16 2018          7          7        72

Per-Hour Traffic Daily Average
time received delivered deferred bounced rejected
--------------------------------------------------------------------
0000-0100 0 1 19 0 0
0100-0200 1 1 13 0 0
0200-0300 1 1 13 0 0
0300-0400 1 1 19 0 0
0400-0500 1 1 14 0 0
0500-0600 0 0 7 0 0
0600-0700 1 1 13 0 0
0700-0800 1 1 13 0 0
0800-0900 0 0 7 0 0
0900-1000 2 2 14 0 1
1000-1100 5 51 32 3 0
1100-1200 1 1 33 0 0
1200-1300 1 4 14 0 0
1300-1400 2 2 20 0 0
1400-1500 2 2 20 0 0
1500-1600 4 4 14 0 0
1600-1700 1 1 20 0 0
1700-1800 2 2 20 0 1
1800-1900 1 2 14 1 0
1900-2000 1 1 13 0 2
2000-2100 1 1 19 0 0
2100-2200 1 1 19 0 0
2200-2300 1 1 13 0 0
2300-2400 1 1 19 0 1
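If you want this summary regularly, a cron entry along these lines works well; the path to pflogsumm.pl is only an example, so adjust it to wherever you unpacked the tarball:

0 2 * * * perl /root/pflogsumm-1.1.5/pflogsumm.pl -d yesterday /var/log/maillog | mail -s "Postfix summary" root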

Can't locate DateTime Perl module

If you get the error below while installing an application on Linux, the Perl DateTime module is missing:

Can't locate DateTime.pm in @INC (@INC contains: /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5 .) at gatherbot_en.pl line 13.
BEGIN failed--compilation aborted at gatherbot_en.pl line 13.

You can use the command below to install DateTime on the server:

yum install perl-DateTime

If after that you get the error below:

Can't locate Date/Parse.pm in @INC

Then run the command below. It may ask to install some dependent modules; keep answering yes or hit Enter.

cpan Date::Parse
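On RHEL/CentOS you can usually install the same module from the distribution packages instead of CPAN:

yum install perl-TimeDate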

 

How to change the color of your BASH prompt

You can change the color of your BASH prompt to green with this command:

export PS1="\[\e[0;32m\][\u@\h \W]\$ \[\e[m\]"

This changes the prompt colour only for the current session. To make it permanent, add the same line to your ~/.bash_profile:

vi ~/.bash_profile

then paste the line above, save the file, and you are done.

For other colors, see the list below:

Color     Code
Black     0;30
Blue      0;34
Green     0;32
Cyan      0;36
Red       0;31
Purple    0;35
Brown     0;33

Light Color     Code
Light Black     1;30
Light Blue      1;34
Light Green     1;32
Light Cyan      1;36
Light Red       1;31
Light Purple    1;35
Light Brown     1;33
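For example, to get a light red prompt instead of green, keep the same pattern and just change the color code:

export PS1="\[\e[1;31m\][\u@\h \W]\$ \[\e[m\]"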

Amazon S3 bucket commands

1. Delete an S3 bucket and all its contents with just one command

Sometimes you may end up with a bucket full of hundreds or thousands of files that you no longer need. If you have ever had to delete a substantial number of items in S3, you know this can be a little time consuming. The following command will delete a bucket and all of its contents, including directories:

aws s3 rb s3://bucket-name --force
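Since this is irreversible, it can be worth listing the bucket contents first to make sure nothing important is left inside:

aws s3 ls s3://bucket-name --recursive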

2. Recursively copy a directory and its subfolders from your PC to Amazon S3

If you have used the S3 Console, at some stage you've probably found yourself having to copy a ton of files to a bucket from your PC. It can be a little clunky at times, especially if you have multiple directory levels that need to be copied. The following AWS CLI command makes the process a little easier, as it copies a directory and all of its subfolders from your PC to Amazon S3. The sync form shown second is handy when the folder name contains spaces, and it only transfers changed files if you re-run it.

aws s3 cp MyFolder s3://bucket-name/Foldername --recursive

aws s3 sync "My Folder" "s3://bucket-name/My Folder"

3. Display subsets of all available ec2 images

The following will display all available ec2 images, filtered to include only those built on Ubuntu (assuming, of course, that you’re working from a terminal on a Linux or Mac machine).

aws ec2 describe-images | grep ubuntu

Warning: this may take a few minutes.

4. List users in a different format

Sometimes, depending on the output format you chose as default, long lists (like a large set of users) can be a little hard to read. Including the --output parameter with, say, the table argument will display a nice, easy-to-read table this one time without having to change your default.

aws iam list-users --output table

5.  List the sizes of an S3 bucket and its contents

The following command uses JSON output to list the size of a bucket and the items stored within. This might come in handy when auditing what is taking up all your S3 storage.

aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"

6. Move an S3 bucket to a different location

If you need to quickly move an S3 bucket to a different location, this command just might save you a ton of time. Keep in mind that sync copies the objects; the old bucket stays in place until you remove it yourself.

aws s3 sync s3://oldbucket s3://newbucket --source-region us-west-1 --region us-west-2
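After the sync finishes, a quick way to compare the two buckets is to look at the object count and total size summaries:

aws s3 ls s3://oldbucket --recursive --summarize | tail -2
aws s3 ls s3://newbucket --recursive --summarize | tail -2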