Microsoft makes a grand-scale entry with a feature-rich Windows 11

Windows 11

Microsoft officially made the big Windows 11 announcement at a Microsoft event. As with most grand tech releases, details had leaked before the official announcement, and Windows 11 was no exception.

In the past, Microsoft claimed that Windows 10 would be the ultimate operating system for many years to come. That said, Windows 10 has now been in the game for six years, and the new release is packed with so many features that Microsoft decided to move ahead with Windows 11.

This new OS is big news for anyone on a Microsoft-based computer. And there is even bigger news: all existing users will be able to get the update for free once it launches.

Windows 11 features
Source: PCWELT

Windows 11 release date

Microsoft is trying to give you one of the biggest gifts of the holiday season, as it plans to release Windows 11 somewhere between late November and Christmas.

Windows 11 minimum requirements

This is something all you readers must have been itching to know. Have a glance at the minimum requirements:

Processor: 1 GHz or faster with 2 or more cores on a compatible 64-bit processor or system on a chip (SoC)
Memory: 4 GB RAM
Storage: 64 GB or larger storage device
System firmware: UEFI, Secure Boot capable
TPM: Trusted Platform Module (TPM) version 2.0
Graphics card: DirectX 12 compatible graphics / WDDM 2.x
Display: >9″ with HD resolution (720p)
Internet connection: Microsoft account and internet connectivity required for setup of Windows 11 Home
Source: Microsoft

Windows 11 Features to look out for

Windows 11 has many cool features, but a few stand out and make it shine: the simplified UI, a brand-new Windows Store, Xbox Game Pass integration, the ability to run Android apps for the first time, improved gaming performance, and enhancements to general performance and multitasking.

Look and feel

The most notable thing about the new UI is the redesigned Start menu and updated Start button, which are now centered on the taskbar. The new look is very similar to Windows 10X, a project planned for dual-screen devices that Microsoft eventually had to scrap.

Windows 11 UI
Source: Pocket-lint

Microsoft has not forgotten to include updated dark and light modes in Windows 11, which are far better than the ones in the current version of Windows.

Work-life and tweaks to multitasking

Microsoft is always thinking about making your work life easier. To simplify it, Microsoft has added brand-new multitasking features such as Snap Layouts to Windows 11. Snap Layouts lets you arrange multiple windows across the screen side by side, in columns, in sections, and more. You can also return to previously snapped windows from the dock; for example, you can jump back to your email app and Edge browser windows without having to snap them into place again.

Snap Layouts
Source: XDA

Usually Skype is one of the actors in the main cast, but this time Microsoft has given the spotlight to Microsoft Teams. Teams is now integrated directly into the taskbar, letting you call your friends, family, and work colleagues. Additionally, Microsoft has thoughtfully included a universal mute button in the system tray that mutes your microphone across all apps.

When you are busy and stacked up with work, you tend to lose track of where everything was. As a solution, Microsoft has improved multi-monitor support: when you reconnect your external monitor, Windows 11 remembers the previous positions of the windows that were on that monitor.

Windows Store and the entry of Android apps

The all-new Windows Store has been redesigned to look much better than before, with better-curated content and improved options for managing the content you have bought.

The best thing that has happened to the Windows Store is Android app support. You are no longer limited to Windows-only apps; you now have a ton of apps to choose from. The new store includes apps from the Adobe Creative Suite as well as ever-so-famous Android apps like TikTok and Instagram.

The Windows Store can host Android apps for the first time thanks to a partnership with Amazon and Intel, and Windows 11 will use Intel Bridge technology to make this feature a reality.

Android apps in Windows store - Windows 11 features
Source: The Verge

General performance

Phew, now you don’t have to worry about Windows updates getting in the way. Updates are now 40% smaller and more efficient, and they happen in the background, which means you can carry on with your work without any disturbance.

Gaming performance

Gaming has become more visually pleasing than ever with Auto HDR, which uses machine intelligence to max out the visuals of your favorite games.

Xbox Game Pass

Microsoft has been testing a new Xbox app behind the scenes, and thanks to that, Windows 11 ships with Xbox Game Pass integration. As part of Xbox Game Pass, xCloud is built into the Xbox app, allowing you to stream games from the Microsoft cloud.

Xbox Game Pass
Source: Microsoft

The new features are stunning, and when Windows 11 officially launches we expect to see even more killer features. Let’s keep our fingers crossed that the corona pandemic doesn’t get in the way of the launch.

Create a Linux RAID 1 Mirror Using mdadm

 

Firstly, you need to install the ‘mdadm’ utility on your system, if it is not installed already:

$ yum install mdadm
After installing ‘mdadm’, we will prepare our disks, sdc and sdd, for RAID configuration with the help of ‘fdisk’:

  • First we will prepare the /dev/sdc disk for RAID. Start by running ‘fdisk /dev/sdc’.
  • Type ‘n’ to create a new partition.
  • Next type ‘p’ to create a primary partition (since this is a new disk, the partition number will be 1).
  • For the first and last cylinder values, press Enter to use the defaults, i.e. the full disk space.
  • Type ‘t’ to change the partition type, followed by ‘1’ (the partition number).
  • Now enter the partition id used for RAID, i.e. ‘fd’, then press ‘w’ to write the changes.
  • The same process is to be followed for /dev/sdd as well (see the sketch after this list). Once both disks have been partitioned, we can examine them using ‘fdisk -l’.
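For reference, here is a rough sketch of that fdisk dialogue (prompts are abbreviated and the exact wording varies between fdisk versions):

$ fdisk /dev/sdc
Command (m for help): n            # new partition
Select (default p): p              # primary partition, number 1
First sector / Last sector:        # press Enter at both prompts to use the whole disk
Command (m for help): t            # change the partition type
Hex code (type L to list codes): fd    # 'fd' = Linux raid autodetect
Command (m for help): w            # write the changes and exit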

    Now that we have partitioned both HDDs, we will create a RAID array (aka an md device) named ‘/dev/md1’ using the following command:

    mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sd[c-d]1

    We will now verify our RAID array by running the following command:

    $ cat /proc/mdstat
    For complete details regarding the RAID 1 array, we use the following command:

    $ mdadm --detail /dev/md1
    The RAID array is ready but still can’t be used, as we have not assigned it a filesystem or mounted it on our system. So we will assign a filesystem first using the ‘mkfs’ command:

    $ mkfs.ext4 /dev/md1

    Next, we will mount it on /data:

    $ mkdir /data
    $ mount /dev/md1 /data

    But this is only a temporary mount and will not survive a reboot, so we will make an entry in /etc/fstab:

    $ vi /etc/fstab

    /dev/md1                /data              ext4    defaults         0 0

    Save and exit the file. Our RAID array is now permanently mounted on /data.
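    To make sure the new fstab entry is valid without rebooting (a quick sanity check that is not part of the original steps), you can unmount the array and let mount re-read fstab:

    $ umount /data && mount -a && df -h /data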

    Lastly, we will create a backup of the RAID configuration so that it can be reassembled later:

    $ mdadm -E -s -v >> /etc/mdadm.conf
    $ mdadm --detail --scan --verbose >> /etc/mdadm.conf
    $ cat /etc/mdadm.conf

    Note: I have tested this on AWS using CentOS, and it worked well for me.

    Thank you,
    Nuwan Vithanage

Website on Plesk server suddenly started to show 500 error

 


Symptoms

  • The website suddenly started to show 500 errors:

    AH10292: Invalid proxy UDS filename (proxy:unix:///var/www/vhosts/system/example.com/php-fpm.sock|fcgi://127.0.0.1:9000/var/www/vhosts/example.com/httpdocs/public/index.php)

  • Apache has been updated recently to a new version.

    Ubuntu 20.04:

    grep 'status installed' /var/log/dpkg.log | grep apache2:amd64
    2021-09-27 12:46:57 status installed apache2:amd64 2.4.41-4ubuntu3.5

    Ubuntu 18.04:

    grep 'status installed' /var/log/dpkg.log | grep apache2:amd64
    2021-09-28 06:25:55 status installed apache2:amd64 2.4.29-1ubuntu4.17

Cause

The issue is related to the latest Apache update, which changed how UDS URIs used to proxy connections from Apache to PHP-FPM are handled.

Resolution

The Plesk development team is aware of the issue and is working on a fix from the Plesk side.

Please “follow” this article to get notified about further instructions about the Apache update. Meanwhile, please apply the following workaround.

For Ubuntu 20.04
  1. Connect to the server using SSH.
  2. Change to superuser:

    sudo su -

  3. Downgrade Apache to the previous version:

    export version="2.4.41-4ubuntu3"; apt-get install apache2=$version apache2-utils=$version apache2-data=$version apache2-bin=$version

  4. Set the apache2 package to "hold" so it is not upgraded again:

    apt-mark hold apache2

For Ubuntu 18.04
  1. Connect to the server using SSH.
  2. Change to superuser:

    sudo su -

  3. Downgrade Apache to the previous version:

    export version="2.4.29-1ubuntu4"; apt-get install apache2=$version apache2-utils=$version apache2-data=$version apache2-bin=$version

  4. Set the apache2 package to "hold" so it is not upgraded again (see the note after these steps on lifting the hold later):

    apt-mark hold apache2
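Once Plesk publishes a fix and this article is updated with further instructions, you can lift the hold so that Apache starts receiving updates again (a hedged sketch, run as root like the steps above):

    apt-mark unhold apache2
    apt-get update && apt-get install apache2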

Optimizing PHP-FPM for High Performance


PHP is everywhere and is arguably the most widely deployed language on the web.

However, it’s not exactly known for its high-performance capabilities, especially when it comes to highly concurrent systems, which is why, for such specialized use cases, languages and runtimes such as Node (yes, I know, it’s not a language), Go, and Elixir are taking over.

That said, there’s a LOT you can do to improve PHP performance on your server. This article focuses on the php-fpm side of things, which is the natural way to run PHP if you’re using Nginx.

If you already know what php-fpm is, feel free to jump to the section on optimization.

What is PHP-FPM?

Not many developers are interested in the DevOps side of things, and even among those who are, very few know what’s going on under the hood. Interestingly, when the browser sends a request to a server running PHP, it’s not PHP that forms the first point of contact; instead, it’s the HTTP server, the major ones being Apache and Nginx. These “web servers” then have to decide how to connect to PHP and pass on the request type, data, and headers to it.


The request-response cycle in the case of PHP (Image credit: ProinerTech)

In modern PHP applications, the “find the file” part above is the index.php, which the server is configured to delegate all requests to.

Now, exactly how the web server connects to PHP has evolved over time, and this article would explode in length if we got into all the nitty-gritty. But roughly speaking, during the time Apache dominated as the web server of choice, PHP was a module included inside the server.

So, whenever a request was received, the server would start a new process, which would automatically include PHP and get it executed. This method was called mod_php, short for “PHP as a module.” This approach had its limitations, which Nginx overcame with php-fpm.

In php-fpm, the responsibility of managing PHP processes lies with the PHP program itself rather than the web server. In other words, the web server (Nginx, in our case) doesn’t care where PHP is or how it is loaded, as long as it knows how to send data to it and receive data back. If you want, you can think of PHP in this case as another server in itself, which manages some child PHP processes for incoming requests (so we have the request reaching a server, which is received by another server and passed on to yet another server; pretty crazy! :-P).

If you’ve done any Nginx setups, or even just pried into them, you’ll come across something like this:

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

The line we’re interested in is this: fastcgi_pass unix:/run/php/php7.2-fpm.sock;, which tells Nginx to communicate with the PHP process through the socket named php7.2-fpm.sock. So, for every incoming request, Nginx writes data through this file, and on receiving the output, sends it back to the browser.

Once again, I must emphasize that this isn’t the most complete or most accurate picture of what goes on, but it’s accurate enough for most DevOps tasks.

With that aside, let’s recap what we’ve learned so far:

  • PHP doesn’t directly receive requests sent by browsers. Web servers like Nginx first intercept these.
  • The web server knows how to connect to the PHP process, and passes on all the request data (literally pastes everything over) to PHP.
  • When PHP is finished doing its part, it sends the response back to the web server, which sends it back to the client (or browser, in most cases).

Or graphically:

php-and-nginx

How PHP and Nginx work together (Image credit: DataDog)

Great so far, but now comes the million-dollar question: what exactly is PHP-FPM?

The “FPM” part in PHP-FPM stands for “FastCGI Process Manager”, which is just a fancy way of saying that the PHP running on a server isn’t a single process, but rather a set of PHP processes that are spawned, controlled, and killed off by this FPM process manager. It is this process manager that the web server passes requests to.

PHP-FPM is an entire rabbit hole in itself, so feel free to explore it if you wish, but for our purposes, this much explanation will do.

Why optimize PHP-FPM?

So why worry about all this dance when things are working all right? Why not just leave things as they are?

Ironically, that is precisely the advice I give for most use cases: if your setup is working fine and doesn’t have extraordinary requirements, use the defaults. However, if you’re looking to scale beyond a single machine, then squeezing the max out of one is essential, as it can cut your server bills in half (or even more!).

Another thing to realize is that Nginx was built to handle huge workloads. It’s capable of handling thousands of connections at the same time, but if the same isn’t true of your PHP setup, you’re just going to waste resources, as Nginx will have to wait for PHP to finish with the current process before it can accept the next one, conclusively negating the very advantages Nginx was built to provide!

So, with that out of the way, let’s look at what exactly we’d change when trying to optimize php-fpm.

How to optimize PHP-FPM?

The configuration file location for php-fpm may differ from server to server, so you’ll need to do some research to locate it. You can use the find command on UNIX-like systems. On my Ubuntu box, the path is /etc/php/7.2/fpm/php-fpm.conf; the 7.2 is, of course, the version of PHP I’m running.
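If you are not sure where your distribution keeps these files, a quick search narrows it down (a generic sketch; paths and version numbers will differ on your system):

sudo find /etc -name php-fpm.conf -o -name www.conf 2>/dev/null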

Here’s what the first few lines of this file look like:

;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;

; All relative paths in this configuration file are relative to PHP's install
; prefix (/usr). This prefix can be dynamically changed by using the
; '-p' argument from the command line.

;;;;;;;;;;;;;;;;;;
; Global Options ;
;;;;;;;;;;;;;;;;;;

[global]
; Pid file
; Note: the default prefix is /var
; Default Value: none
pid = /run/php/php7.2-fpm.pid

; Error log file
; If it's set to "syslog", log is sent to syslogd instead of being written
; into a local file.
; Note: the default prefix is /var
; Default Value: log/php-fpm.log
error_log = /var/log/php7.2-fpm.log

A few things should be immediately obvious: the line pid = /run/php/php7.2-fpm.pid tells us which file contains the process id of the php-fpm process.

We also see that /var/log/php7.2-fpm.log is where php-fpm is going to store its logs.

Inside this file, add (or adjust) these three settings:

 

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

The first two settings are cautionary: they tell the php-fpm process that if ten child processes fail within a minute, the main php-fpm process should restart itself.

This might not sound robust, but PHP is a short-lived process that does leak memory, so restarting the main process in cases of high failure can solve a lot of problems.

The third option, process_control_timeout, tells the child processes to wait for this much time before executing the signal received from the parent process. This is useful in cases where the child processes are in the middle of something when the parent processes send a KILL signal, for example. With ten seconds on hand, they’ll have a better chance of finishing their tasks and exiting gracefully.

Surprisingly, this isn’t even the meat of the php-fpm configuration! That’s because, for serving web requests, php-fpm creates a pool of processes, which has its own separate configuration. In my case, the pool name turned out to be www, and the file I wanted to edit was /etc/php/7.2/fpm/pool.d/www.conf.

Let’s see what this file starts like:

; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[www]

; Per pool prefix
; It only applies on the following directives:
; - 'access.log'
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or /usr) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = www-data
group = www-data

Scrolling a little further down the file, you’ll find the process manager setting, which on my system reads pm = dynamic. So, what does “dynamic” mean here? I think the official docs explain it best (I mean, this should already be part of the file you’re editing, but I’ve reproduced it here just in case it isn’t):

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives. With this process management, there will be
;             always at least 1 children.
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
;  ondemand - no children are created at startup. Children will be forked when
;             new requests will connect. The following parameter are used:
;             pm.max_children           - the maximum number of children that
;                                         can be alive at the same time.
;             pm.process_idle_timeout   - The number of seconds after which
;                                         an idle process will be killed.
; Note: This value is mandatory.

So, we see that there are three possible values:

  • Static: A fixed number of PHP processes will be maintained no matter what.
  • Dynamic: We get to specify the minimum and maximum number of processes that php-fpm will keep alive at any given point in time.
  • ondemand: Processes are created and destroyed, well, on-demand.

So, how do these settings matter?

In simple terms, if you have a website with low traffic, the “dynamic” setting is a waste of resources most of the time. Assuming that you have pm.min_spare_servers set to 3, three PHP processes will be created and maintained even when there’s no traffic on the website. In such cases, “ondemand” is a better option, letting the system decide when to launch new processes.
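For a low-traffic site, an “ondemand” pool can be as simple as the sketch below (illustrative values only, not taken from this article; tune them to your own workload):

pm = ondemand
; maximum workers that may exist at once
pm.max_children = 10
; kill a worker after it has sat idle for this long
pm.process_idle_timeout = 10s
; recycle each worker after this many requests to contain memory leaks
pm.max_requests = 500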

On the other hand, websites that handle large amounts of traffic or must respond quickly will suffer under “ondemand”: creating a new PHP process, making it part of the pool, and monitoring it is extra overhead that is best avoided.

Using pm = static fixes the number of child processes, letting maximum system resources be used for serving requests rather than managing PHP. If you do go this route, beware that it has its own guidelines and pitfalls. A rather dense but highly useful article about it is here.
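If you do try pm = static, a rough sizing sketch looks like this (the numbers are assumptions: roughly 4 GB of RAM left over for PHP and an average of about 100 MB per worker; measure your own averages before copying anything):

pm = static
; max_children ~= RAM available to PHP / average memory per worker (4096 MB / 100 MB ~= 40)
pm.max_children = 40
; recycle workers periodically so leaked memory is reclaimed
pm.max_requests = 500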

Final words

Since articles on web performance can spark wars or serve to confuse people, I feel that a few words are in order before we close this article. Performance tuning is as much about guesswork and dark arts as it is about system knowledge.

Even if you know all the php-fpm settings by heart, success isn’t guaranteed. If you had no clue about the existence of php-fpm, then you don’t need to waste time worrying about it. Just keep doing what you’re already doing and carry on.

At the same time, avoid the other extreme of becoming a performance junkie. Yes, you can get even better performance by recompiling PHP from scratch and removing all the modules you won’t be using, but this approach isn’t sane enough for production environments. The whole idea of optimizing is to look at whether your needs differ from the defaults (which they seldom do!) and make minor changes as needed.

If you are not ready to spend time optimizing your PHP servers, you may consider leveraging a reliable platform like Kinsta, which takes care of performance optimization and security for you.

 

Example configuration

 

"/etc/opt/remi/php74/php-fpm.d/swamfbdorg.conf"
pm = dynamic
pm.max_children = 5
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 400
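After editing a pool file like the one above, it is worth validating the syntax and reloading the service. The exact binary and unit names vary between installs (php-fpm, php7.2-fpm, php74-php-fpm, and so on), so treat the names below as placeholders:

# check the configuration for syntax errors
sudo php-fpm -t
# apply the changes gracefully
sudo systemctl reload php-fpm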

DevOps Skills: What Makes a DevOps Engineer a Great Asset to Any Company

 


Have you heard about DevOps and how beneficial it is to any IT company? Oh, sure you have; even your 95-year-old granny has. It is talked about constantly, though usually in the same manner: it’s so important to do DevOps, and if you don’t (the why is often omitted), say goodbye to the bright future of your business. The first thing to understand here is that DevOps really is important. “Like gas for your car?” you may ask. Nowhere near the gas, nor the engine. It is both important and supplementary, more like a timing belt: a part that won’t come to mind first if you were asked about the core of the car. However, once the timing belt breaks, the car simply won’t start.

What are DevOps and a DevOps engineer?

So, what is DevOps? The diversity of the means and objectives that companies set before their DevOps teams is immense, as is the range of DevOps implementation examples, which means any industry-wide definition will be a very general one. “DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support” (The Agile Admin) seems about right. Another definition by the same source completes the one above: “DevOps is also characterized by operations staff making use of many of the same techniques as developers for their systems work.” This is actually true: DevOps is the result of merging the two positions. Literally, a system administrator plus a developer gives you a DevOps engineer.

DevOps engineers rarely start their careers as DevOps engineers. Most often, they are developers who picked up the tricks of system administration or, vice versa, system administrators who got the hang of coding. To a certain degree, the success of the DevOps philosophy is due to this synergy of training, which enables an individual to see the world from multiple perspectives. Thus, a DevOps engineer helps the developers, testers, and IT teams smooth out product delivery through better infrastructure monitoring, extensive automation, advanced tooling, and efficient workflows. This enables far faster deployments and reduces human error and MTTR, among other perks.

Okay, we’re now clear on what DevOps and a DevOps engineer are, and we have even touched on why we need them. To look into why many would give their right hand to have such a lifesaver (or a team of them) in their offices, we’ll have to study a DevOps engineer’s job description in more depth. What should a DevOps engineer know?

What makes a DevOps engineer a great asset to your company?

Infrastructure automation and monitoring skills

Many feel confused about the mission of the DevOps team and see it as automating everything within a DevOps engineer’s reach. Although this statement isn’t 100% true, automation skills are indeed ranked top in the DevOps Institute’s skills report, “Upskilling: Enterprise DevOps Skills Report 2019.” Infrastructure automation goes hand in hand with infrastructure monitoring, and the two further branch into server provisioning, configuration management, automated builds, code deployments, and, of course, monitoring.

As a developer, to deal with the tasks above, a DevOps engineer is able to code and uses at least one scripting language (Java or C#, or PowerShell, depending on the OS of choice). Although a DevOps engineer rarely codes from scratch, he or she possesses basic scripting skills and knows the how and why of developers’ work.

As a system administrator, a DevOps engineer knows how to operate a bunch of tools to cope with infrastructure-related tasks. Among these, DevOps engineers name Ansible, Chef, Puppet, Jenkins, Docker, New Relic, and Sensu, along with other great tools (listed in the above article, broken down by category) that are typically used in the DevOps environment. Not bad for one employee, hmmm?

Process skills

Although automation skills rule, “automation without process expertise… is all but useless,” states the same report. That’s why a successful DevOps engineer has profound knowledge of the entire workflow within a given company and is able to make processes more efficient and fluent (as this is what we need DevOps for). In this connection, an understanding of the SDLC, experience with source control models and processes, and Agile methodologies all add to the DevOps engineer’s skills list. These may seem far too simple compared to the infrastructure-management skills. Nevertheless, this is exactly what helps DevOps folks “to strategize the entire integration and deployment process” (IntelliPaat). Without the correct workflow, an enterprise stumbles into organizational issues that all too often are tackled with technical means, which is completely incorrect. If DevOps engineers are system administrators and masters of processes, human (a.k.a. process-related) issues won’t be an issue for the company whatsoever.

Soft skills

This type of skill was also highlighted in the Upskilling: Enterprise DevOps Skills Report. The authors name collaboration and cooperation, problem-solving, interpersonal skills, and many more, with risk-taking wrapping up the list. This points to the fact that a DevOps engineer is a brilliant team player. That quality is often not so essential in the case of a developer; on the contrary, it is urgent in the DevOps universe, judging by the number of individuals and teams a DevOps engineer has to interact with on a daily basis.

In its turn, Qulix Systems distinguishes the following soft skills expected from a DevOps candidate:

  • Strive for constant perfection and thirst for the new;
  • Curiosity, inner drive to get to the bottom, enthusiasm;
  • Strong work ethic and time management skills;
  • Openness and teamwork skills.

 

DevOps Engineer Skills

A Recap

As a developer, a DevOps engineer should be able to develop new software and upgrade and fix existing software. As a system administrator, he or she needs to master various tools and technologies to automate, configure, and monitor operating environments. Finally, as a team player who is regularly in touch with different people, a DevOps engineer must demonstrate impeccable soft skills. It seems that a DevOps engineer is a jack of all trades and, surprisingly, a master of all. What does this actually give an enterprise, in figures? According to the “State of DevOps Report” by Puppet: 200 times more frequent deployments, 24 times faster recovery, and 3 times lower change failure rates.

In short, the era of speedy delivery, efficient troubleshooting, and spectacular networking is synonymous with the era of DevOps. Is there a flip side to this success story? Well, it’s the $100,000 salary, which lands DevOps engineers at the top of almost every highest-paying tech jobs list.

Well, if your company already has a DevOps department up and running, surely you’ve seen the mesmerizing effect of them in action. If you are yet to try this IT weapon out, don’t wait too long, and certainly don’t be thrown off by the numbers in the paragraph above. Given that the global IT industry is about to reach $5 trillion in 2019, it’s a reasonable, if not modest, price to pay.

How to Set Up an SSH MOTD Banner

 


Here are the steps to redo it if you need to set it up on a new server:

  1. Generate the banner using http://www.bagill.com/ascii-sig.php (Font: Standard, Width: 80 or 60, Alignment: Left).
  2. Click "Generate ASCII Signature" and take a copy of the output.
  3. SSH into the new server.
  4. Create the banner file with vi /etc/sshbanner.txt and paste the copy there.
  5. Open vi /etc/ssh/sshd_config and search for "banner".
  6. Uncomment it and add the banner path:

     # no default banner path
     Banner  /etc/sshbanner.txt

  7. Save the file, then run systemctl restart sshd && systemctl status sshd.
  8. Log out of the current SSH session, reconnect using SSH, and verify that the MOTD banner is visible with the new changes.
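A minimal sketch of that verification step (keep your existing session open until the new one works; the user and hostname below are placeholders):

sshd -t                      # run as root; validates the sshd configuration before you rely on the restart
ssh user@your-server-ip      # the banner from /etc/sshbanner.txt should print before the password prompt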

ModSecurity And Mod_evasive For Apache On CentOS 7

Introduction

 

ModSecurity and mod_evasive are free Apache modules that protect your web server: ModSecurity is a web application firewall that blocks attacks such as SQL injection, cross-site scripting, and session hijacking, while mod_evasive helps fend off brute force and (D)DoS attacks. These modules can be deployed and integrated into your infrastructure without having to modify your internal network.

In this tutorial, I will explain how to install, configure and integrate ModSecurity and mod_evasive with Apache on CentOS 7.

Requirements

  • A server running CentOS v. 7 with Apache installed
  • A static IP Address for your server

Installing ModSecurity And Mod_evasive

First, you will need to install the EPEL yum repository on the server. Run the following command to install and enable the EPEL repository:

sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Now you can install mod_security and mod_evasive by running the following command:

sudo yum --enablerepo=epel install mod_security mod_evasive

After installing these modules, you can verify them by running the following commands:

sudo httpd -M | grep evasive

If mod_evasive is enabled, you will see the following output:

evasive20_module (shared)

To test the mod_security module, run:

sudo httpd -M | grep security

If mod_security is enabled, you will see the following output:

security2_module (shared)

Configure ModSecurity

Now that the installation is complete and verified, you will need to install a Core Rule Set (CRS) for mod_security to use. The CRS provides the web server with a set of rules describing how to behave under certain conditions. You can download and install the latest OWASP CRS by running the following commands:

sudo mkdir /etc/httpd/crs
cd /etc/httpd/crs
sudo wget https://github.com/SpiderLabs/owasp-modsecurity-crs/tarball/master
sudo tar -xvf master
sudo mv SpiderLabs-owasp-modsecurity-crs-* owasp-modsecurity-crs

Now go to the installed OWASP CRS directory:

cd /etc/httpd/crs/owasp-modsecurity-crs/

In the OWASP CRS directory, you will find a sample rules file, modsecurity_crs_10_setup.conf.example. You need to copy it to a new file named modsecurity_crs_10_setup.conf:

sudo cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_setup.conf

Now you need to tell Apache to use this file along with the module. You can do this by editing Apache's main configuration file:

sudo nano /etc/httpd/conf/httpd.conf

Add the following lines at the end of the file:

<IfModule security2_module>
    Include /etc/httpd/crs/owasp-modsecurity-crs/modsecurity_crs_10_setup.conf
    Include /etc/httpd/crs/owasp-modsecurity-crs/base_rules/*.conf
</IfModule>

Save and close the file and restart Apache to reflect changes.

sudo apachectl restart
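If Apache refuses to restart after adding the Include lines, a quick syntax check usually points straight at the offending file (standard Apache tooling, not specific to this guide):

sudo apachectl configtest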

Last, it is a good idea to create your own configuration file within the modsecurity.d directory. You can do this by creating a file named mod_security.conf.

sudo nano /etc/httpd/modsecurity.d/mod_security.conf

Add the following lines:

<IfModule mod_security2.c>
    SecRuleEngine On
    SecRequestBodyAccess On
    SecResponseBodyAccess On 
    SecResponseBodyMimeType text/plain text/html text/xml application/octet-stream 
    SecDataDir /tmp
</IfModule>

Save and close the file and restart Apache to reflect the changes.

sudo apachectl restart

Configure Mod_evasive

The mod_evasive module reads its configuration from /etc/httpd/conf.d/mod_evasive.conf, which can be easily customized. You don’t need to create a separate configuration file, because there are no rules to update during a system upgrade.

The default mod_evasive.conf file has the following directives enabled:

<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
</IfModule>

You can change these values according to the amount and type of traffic that your web server needs to handle; a tuned example follows the list below.

  • DOSHashTableSize: This directive specifies how mod_evasive keeps track of who’s accessing what. Increasing this number will provide a faster lookup of the sites that the client has visited in the past.
  • DOSPageCount: This directive specifies how many identical requests to a specific URI a visitor can make over the DOSPageInterval interval.
  • DOSSiteCount: This is similar to DOSPageCount, but corresponds to how many requests overall a visitor can make to your site over the DOSSiteInterval interval.
  • DOSBlockingPeriod: If a visitor exceeds the limits set by DOSPageCount or DOSSiteCount, their IP will be blocked for the DOSBlockingPeriod amount of time. During this interval, they will receive a 403 (Forbidden) error.
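For example, a busier site might loosen the thresholds along these lines (illustrative values only, not recommendations; watch your logs and adjust to your traffic):

<IfModule mod_evasive20.c>
    DOSHashTableSize    8192
    DOSPageCount        10
    DOSSiteCount        150
    DOSPageInterval     1
    DOSSiteInterval     2
    DOSBlockingPeriod   60
</IfModule>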

One of the most important configuration options you need to change is DOSEmailNotify. If this option is enabled, an email will be sent to the specified email address whenever an IP address is blacklisted.

You can do this by editing the mod_evasive.conf file:

sudo nano  /etc/httpd/conf.d/mod_evasive.conf

Uncomment the DOSEmailNotify line by removing the # in front of the line, and change the email address to your own:

DOSEmailNotify   jdoe@gmail.com

Save and close the file and restart Apache to reflect the changes.

sudo apachectl restart

Note: You need to have a functioning mail server on this server for this email alert to work.
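You may also want to exempt trusted sources, such as monitoring systems, so they are never blocked. mod_evasive provides the DOSWhitelist directive for this; the addresses below are placeholders:

DOSWhitelist   127.0.0.1
DOSWhitelist   192.168.1.*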

Testing ModSecurity

To test mod_security you can use curl to send HTTP requests to the Apache server. One of the ModSecurity default rules is to reject requests that have a User-Agent of “Nessus”. This is intended to deny information to attackers who use automated scanners.

You can check mod_security by running the following command:

sudo curl -i http://192.168.1.42/ -A Nessus

You should see a 403 Forbidden response, as shown below. ModSecurity has blocked the request because the User-Agent identifies it as a Nessus scan.

HTTP/1.1 403 Forbidden
Date: Tue, 27 Oct 2015 11:08:39 GMT
Server: Apache
X-Frame-Options: SAMEORIGIN
Last-Modified: Thu, 16 Oct 2014 13:20:58 GMT
Accept-Ranges: bytes
Content-Length: 4897
X-XSS-Protection: 1; mode=block
Content-Type: text/html; charset=UTF-8

Testing Mod_evasive

Now it’s time to make sure that the mod_evasive module is working. You can do this using the Perl script test.pl written by the mod_evasive developers.

Before running this script, you need to make some changes:

sudo nano /usr/share/doc/mod_evasive-1.10.1/test.pl

Find the line for(0..100) { and replace 100 with 200. Then find the line PeerAddr=> "127.0.0.1:80"); and replace 127.0.0.1 with your server IP (192.168.1.42 in this example).

#!/usr/bin/perl
# test.pl: small script to test mod_dosevasive's effectiveness

use IO::Socket;
use strict;

for(0..200) {
  my($response);
  my($SOCKET) = new IO::Socket::INET( Proto   => "tcp",
                                  PeerAddr=> "192.168.1.42:80");
  if (! defined $SOCKET) { die $!; }
  print $SOCKET "GET /?$_ HTTP/1.0\n\n";
  $response = <$SOCKET>;
  print $response;
  close($SOCKET);
}

Save and exit.

Now, run the script:

sudo  /usr/share/doc/mod_evasive-1.10.1/test.pl

You should see the following output:

HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden

mod_evasive also logs to syslog when an IP address is blocked. You can check the log file using:

sudo tailf /var/log/messages

You should see the following output:

Oct 26 15:36:42 CentOS-7 mod_evasive[2732]: Blacklisting address 192.168.1.42: possible DoS attack.