10 things to avoid in Docker containers

So you finally surrendered to containers and discovered that they solve a lot of problems and have a lot of advantages:

  1. First: Containers are immutable – The OS, library versions, configurations, folders, and application are all wrapped inside the container. You guarantee that the same image that was tested in QA will reach the production environment with the same behavior.
  2. Second: Containers are lightweight – The memory footprint of a container is small. Instead of hundreds or thousands of MBs, the container will only allocate the memory for the main process.
  3. Third: Containers are fast – You can start a container as fast as a typical Linux process takes to start. Instead of minutes, you can start a new container in a few seconds.

However, many users are still treating containers just like typical virtual machines and forget that containers have an important characteristic: Containers are disposable.

The mantra around containers:

“Containers are ephemeral”.


This characteristic forces users to change their mindset on how they should handle and manage containers, and I’ll explain what you should NOT do in order to get the most benefit from containers:

1) Don’t store data in containers – A container can be stopped, destroyed, or replaced. An application version 1.0 running in a container should be easily replaceable by version 1.1 without any impact or loss of data. For that reason, if you need to store data, do it in a volume. You should also take care when two containers write data to the same volume, as that could cause corruption. Make sure your applications are designed to write to a shared data store.
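
For example, a named volume keeps the data outside the container’s writable layer, so the container itself stays disposable (the image and volume names below are placeholders, not from the original post):

docker volume create app-data
docker run -d --name app-v1 -v app-data:/var/lib/app myapp:1.0
docker rm -f app-v1                                                # throw the container away
docker run -d --name app-v2 -v app-data:/var/lib/app myapp:1.1     # the data in app-data survives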

2) Don’t ship your application in two pieces – Because some people see containers as virtual machines, they tend to think that they should deploy their application into existing running containers. That can be true during the development phase, where you need to deploy and debug continuously; but for a continuous delivery (CD) pipeline to QA and production, your application should be part of the image. Remember: Containers are immutable.

3) Don’t create large images – A large image will be harder to distribute. Make sure that you have only the required files and libraries to run your application/process. Don’t install unnecessary packages or run “updates” (yum update) that download many files to a new image layer.

UPDATE: There’s another post that better explains this recommendation: “Keep it small: a closer look at Docker image sizing”.

4) Don’t use a single layer image – To make effective use of the layered filesystem, always create your own base image layer for your OS, another layer for the username definition, another layer for the runtime installation, another layer for the configuration, and finally another layer for your application. It will be easier to recreate, manage, and distribute your image.

5) Don’t create images from running containers – In other terms, don’t use “docker commit” to create an image. This method to create an image is not reproducible and should be completely avoided. Always use a Dockerfile or any other S2I (source-to-image) approach that is totally reproducible, and you can track changes to the Dockerfile if you store it in a source control repository (git).
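
As a rough sketch (the application file, user, and image names are placeholders rather than anything prescribed by this post), a small Dockerfile like the one below also ties together several of the other points in this list, such as a pinned parent tag, baking the application into the image, and a non-root user:

cat > Dockerfile <<'EOF'
# Pin the parent image tag instead of relying on "latest" (see point 6)
FROM python:3.9-slim
# Run as a dedicated non-root user (see point 9)
RUN useradd --create-home appuser
WORKDIR /home/appuser
# The application is baked into the image, not copied into a running container
COPY app.py .
USER appuser
CMD ["python", "app.py"]
EOF
# Reproducible build from the Dockerfile, tagged with a version instead of "latest"
docker build -t myapp:1.0 .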

6) Don’t use only the “latest” tag – The latest tag is just like the “SNAPSHOT” for Maven users. Tags are encouraged because of the layered filesystem nature of containers. You don’t want any surprises when you build your image some months later and discover that your application can’t run because a parent layer (FROM in the Dockerfile) was replaced by a new version that is not backward compatible, or because the wrong “latest” version was retrieved from the build cache. The “latest” tag should also be avoided when deploying containers in production, as you can’t track which version of the image is running.

7) Don’t run more than one process in a single container – Containers are perfect to run a single process (http daemon, application server, database), but if you have more than a single process, you may have more trouble managing, retrieving logs, and updating the processes individually.

8) Don’t store credentials in the image. Use environment variables – You don’t want to hardcode any username/password in your image. Use the environment variables to retrieve that information from outside the container. A great example of this principle is the Postgres image.
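
For example, the official Postgres image reads its superuser password from an environment variable at run time instead of baking it into the image (the value here is obviously just a placeholder):

docker run -d --name db -e POSTGRES_PASSWORD=change-me postgres:13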

9) Don’t run processes as a root user – “By default docker containers run as root. (…) As docker matures, more secure default options may become available. For now, requiring root is dangerous for others and may not be available in all environments. Your image should use the USER instruction to specify a non-root user for containers to run as”. (From Guidance for Docker Image Authors)

10) Don’t rely on IP addresses – Each container has its own internal IP address, which can change when the container is stopped and started. If your application or microservice needs to communicate with another container, use environment variables to pass the proper hostname and port from one container to another.
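
For example, on a user-defined Docker network containers can reach each other by name, and the hostname and port are handed to the application through environment variables (DB_HOST and DB_PORT are illustrative names, not a convention from this post):

docker network create app-net
docker run -d --name db --network app-net postgres:13
docker run -d --name web --network app-net -e DB_HOST=db -e DB_PORT=5432 myapp:1.0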

For more information about containers, visit and register at http://developers.redhat.com/containers/

DevOps – Road Map 2022


This is my very first post of the 2022 New Year. As the new year began, I decided to identify which road map I have to follow as a DevOps engineer. I did several Google searches and identified several areas that I need to improve.

What Is DevOps?


DevOps is a set of practices, tools, and a cultural philosophy that automates and integrates the processes between software development and IT teams. It emphasizes team empowerment, cross-team communication and collaboration, and technology automation.

The DevOps movement began around 2007, when the software development and IT operations communities raised concerns about the traditional software development model, where developers who wrote code worked apart from the operations staff who deployed and supported the code. The term DevOps, a combination of the words development and operations, reflects the process of integrating these disciplines into one continuous process.

How does DevOps work?


A DevOps team includes developers and IT operations working collaboratively throughout the product lifecycle, in order to increase the speed and quality of software deployment. It’s a new way of working, a cultural shift, that has significant implications for teams and the organizations they work for.

Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams merge into a single team where the engineers work across the entire application lifecycle — from development and test to deployment and operations — and have a range of multidisciplinary skills.

DevOps teams use tools to automate and accelerate processes, which helps to increase reliability. A DevOps toolchain helps teams tackle important DevOps fundamentals including continuous integration, continuous delivery, automation, and collaboration.

DevOps values are sometimes applied to teams other than development. When security teams adopt a DevOps approach, security is an active and integrated part of the development process. This is called DevSecOps.

Development + Operation = DevOps

Code Repository  —> Gitlab, Github 

  1. How developers work
  2. Which Git workflow to use
  3. How an application is configured
  4. Automated Testing
  5. Prepare the infrastructure

Servers 

  1. Basic Linux Administration
  2. Shell commands
  3. Linux File System
  4. Server Management

Networking and Security

  1. Firewall, Proxy Server
  2. Load Balancers
  3. HTTP/HTTPS concepts
  4. IP, DNS Name Resolutions
  5. Container technology – Containers have become the de facto software packaging format
  6. Virtualization
  7. CI/CD  Build tools and package manager

Learn at least one cloud provider, e.g., AWS, Azure, OCI, Alibaba

AWS

https://aws.amazon.com/training/digital/aws-cloud-practitioner-essentials/

https://explore.skillbuilder.aws/learn/course/external/view/elearning/2045/aws-well-architected

Oracle.

https://learn.oracle.com/ols/learning-path/become-an-oci-foundation-associate/35644/98057

Free Course and Certification – Gain your next-generation knowledge

In this blog post, I’m going to list a few freely available online courses; when you follow them, you can gain good knowledge of industry-leading technologies and skills.

 

Digital Marketing Courses

Through the following link, you can become familiar with Oracle Cloud Infrastructure and its use cases:

https://learn.oracle.com/ols/course-list/35644

Through the following link, you can become familiar with the 3CX VoIP system:

https://www.3cx.com/3cxacademy/registration/

The demand for professionals with Neo4j skills is growing tremendously. Now, you can become one of the first to prove your experience. Tomorrow’s jobs require NoSQL and graph database skills – so invest now to advance your career.

Get Your Certification

Now is the perfect time to show your employer, customers, and colleagues that you are a Neo4j expert. With the Neo4j Certified Professional exam, you certify your existing experience and skills.

Take the Neo4j Certified Professional exam right now, and you’ll be done in less than an hour.

If you pass the certification, you will be rewarded with a t-shirt in a color of your choice.


https://www.coursera.org/learn/agile-atlassian-jira

About this Course

Agile teams use “just enough” planning and an incremental approach to accomplishing the work of the team. Every project and every team uses a custom agile methodology. This course discusses common foundational principles and practices used by agile methodologies, providing the student a flexible set of tools to use in any role (e.g. product owner, scrum master, project manager, team member) on any agile team. This course mainly discusses agile and lean principles, the kanban and scrum agile methodologies, and uses Atlassian Jira Software Cloud as the main software tool to apply hands-on learning of the topics. The course includes instruction on “classic” Jira projects as well as the new “next-gen” Jira projects.

Students can use the free plan of Atlassian Jira Software Cloud to complete the hands-on labs associated with this course. By the time you have completed this course, you will have a strong foundational understanding of agile principles and practices, and strong hands-on experience with Atlassian Jira Software Cloud. You will be the site administrator for your Jira account, so you will be able to perform basic administration tasks on your site. You should be able to continuously configure your Jira project(s) to match your team’s custom agile methodology. You can watch the videos and take the quizzes from your phone if you want, but the hands-on labs using Atlassian Jira Software Cloud require you to have a Windows, Mac or Linux computer. This course tries to be as concise as possible. It will probably take you about 6-12 hours to go through, but your mileage may vary. It is highly encouraged that you apply what you learn to managing personal agile projects after the course is complete.

5 Popular Web Hosting Services Found Vulnerable to Multiple Flaws

A security researcher has discovered multiple one-click client-side vulnerabilities in some of the world’s most popular and widely used web hosting companies that could have put millions of their customers, as well as billions of their sites’ visitors, at risk of hacking.

Independent researcher and bug-hunter Paulos Yibelo, who shared his new research with The Hacker News, discovered roughly a dozen serious security vulnerabilities in Bluehost, Dreamhost, HostGator, OVH, and iPage, which together host roughly seven million domains.


Some of the vulnerabilities are so simple to exploit that attackers only need to trick victims into clicking a link or visiting a malicious website to take over the accounts of anyone using the affected web hosting providers.

Critical Flaws Reported in Popular Web Hosting Services

Yibelo tested all the below-listed vulnerabilities on all five web hosting platforms and found several account takeover, cross-site scripting, and information disclosure vulnerabilities, which he documented on the Website Planet blog.

1. Bluehost—the company is owned by Endurance, which also owns HostGator and iPage, and in total the three hosting providers power more than 2 million sites around the world. Bluehost was found vulnerable to:
  • Information leakage through cross-origin-resource-sharing (CORS) misconfigurations (see the quick check after this list)
  • Account takeover due to improper JSON request validation CSRF
  • A Man-in-the-middle attack can be performed due to improper validation of CORS scheme
  • Cross-site scripting flaw on my.bluehost.com allows account takeover (demonstrated in a proof-of-concept, below)

2. Dreamhost—the hosting provider that powers one million domains was found vulnerable to:

  • Account takeover using cross-site scripting (XSS) flaw

3. HostGator

  • Site-wide CSRF protection bypass allows complete control
  • Multiple CORS misconfigurations leading to information leakage and CRLF injection

4. OVH Hosting—the company that alone powers four million domains around the world was found vulnerable to:

  • CSRF protection bypass
  • API misconfigurations

5. iPage Hosting

  • Account takeover flaw
  • Multiple Content Security Policy (CSP) bypasses
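
Several of the findings above come down to CORS misconfigurations. As a rough illustration of how such an issue is typically spotted (the endpoint below is a placeholder, not one of the URLs from the report), you can check whether an API reflects an arbitrary Origin header back along with Access-Control-Allow-Credentials:

curl -s -D - -o /dev/null -H "Origin: https://evil.example" \
  https://api.example.com/account | grep -i "access-control-allow"
# A response that echoes back "Access-Control-Allow-Origin: https://evil.example"
# together with "Access-Control-Allow-Credentials: true" would let an attacker's
# page read authenticated responses cross-origin.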

Video Demonstrations

Talking to The Hacker News, Yibelo said he spent about an hour, on average, on each of the five web hosting platforms to find at least one account-takeover-related client-side vulnerability, mostly using Burp Suite, a web application security testing tool, and Firefox browser plugins.

“They mostly focus on protecting the wrong assets, but most of them have medium security standards for their user profile portals and data exfiltration vulnerability classes. Most of their protections are easily bypassable using lesser-known tricks,” Yibelo told The Hacker News.

Among the affected hosting companies, Yibelo found Bluehost, HostGator and iPage to be the easiest ones to hack into, though he told The Hacker News that HostGator included “multiple layers of security checks (that can be bypassed, but they are there, unlike the other sites).”

Yibelo reported his findings to the affected web hosting providers, and all except OVH patched their services before the information went public yesterday. OVH has yet to confirm and respond to the researcher’s findings.

 

How to use Restic for backups

Introduction


Restic is a secure and efficient backup client written in the Go language. It can backup local files to a number of different backend repositories such as a local directory, an SFTP server, or an S3-compatible object storage service.

In this tutorial, we will install Restic and initialize a repository on an object storage service. We’ll then back up some files to the repository. Finally, we’ll automate our backups to take hourly snapshots and automatically prune old snapshots when necessary.

Prerequisites

For this tutorial, you need a UNIX-based computer with some files you’d like to back up. Though Restic is available for Mac, Linux, and Windows, the commands and techniques used in this tutorial will only work on MacOS and Linux.

Restic requires a good amount of memory to run, so you should have 1GB or more of RAM to avoid receiving errors.

You will also need to know the following details about your object storage service:

  • Access Key
  • Secret Key
  • Server URL
  • Bucket Name

Installing the Restic Backup Client

Restic is available as a precompiled executable for many platforms. This means we can download a single file and run it, no package manager or dependencies necessary.

To find the right file to download, first, use your web browser to navigate to Restic’s release page on GitHub. You’ll find a list of files under the Downloads header.

For a 64-bit Linux system (the most common server environment) you want the file ending in _linux_amd64.bz2.

For MacOS, look for the file with _darwin_amd64.bz2.

Right-click on the correct file for your system, then choose Copy Link Address (the wording may be slightly different in your browser). This will copy the download URL to your clipboard.

Next, in a terminal session on the computer you’re backing up (if it’s a remote machine you may need to log in via SSH first), make sure you’re in your home directory, then download the file with curl:

cd ~
curl -LO https://github.com/restic/restic/releases/download/v0.7.3/restic_0.7.3_linux_amd64.bz2

Unzip the file we downloaded:

bunzip2 restic*

Then copy the file to /usr/local/bin and update its permissions to make it executable. We’ll need to use sudo for these two actions, as a normal user doesn’t have permission to write to /usr/local/bin:

sudo cp restic* /usr/local/bin/restic
sudo chmod a+x /usr/local/bin/restic

Test that the installation was successful by calling the restic command with no arguments:

restic

Some help text should print to your screen. If so, the restic binary has been installed properly. Next, we’ll create a configuration file for Restic, then initialize our object storage repository.

Creating a Configuration File
Restic needs to know our access key, secret key, object storage connection details, and repository password in order to initialize a repository we can then back up to. We are going to make this information available to Restic using environment variables.

Environment variables are bits of information that you can define in your shell, which are passed along to the programs you run. For instance, every program you run on the command line can see your $PWD environment variable, which contains the path of the current directory.

It’s common practice to put sensitive tokens and passwords in environment variables, because specifying them on the command line is not secure. Since we’re going to be automating our backups later on, we’ll save this information in a file where our script can access it.

First, open a file in your home directory:

nano ~/.restic-env

This will open an empty file with the nano text editor. When we’re done, the file will consist of four export commands. These export statements define environment variables and make them available to any programs you run in the future:

.restic-env
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export RESTIC_REPOSITORY="s3:server-url/bucket-name"
export RESTIC_PASSWORD="a-strong-password"

The access and secret keys will be provided by your object storage service. You may want to generate a unique set of keys just for Restic, so that access can be easily revoked in case the keys are lost or compromised.

An example RESTIC_REPOSITORY value would be: s3:nyc3.digitaloceanspaces.com/example-bucket. If you need to connect to a server on a non-standard port or over unsecured HTTP-only, include that information in the URL like so s3:http://example-server:3000/example-bucket.

RESTIC_PASSWORD defines a password that Restic will use to encrypt your backups. This encryption happens locally, so you can back up to an untrusted offsite server without worrying about the contents of your files being exposed.

You should choose a strong password here, and copy it somewhere safe for backup. One way to generate a strong random password is to use the openssl command:

openssl rand -base64 24

Output
j8CGOSdz8ibUYK137wtdiD0SJiNroGUp
This outputs 24 random bytes encoded as a 32-character base64 string, which you can copy and paste into the configuration file.

Once all the variables are filled out properly, save and close the file.

Initializing the Repository
To load the configuration into our shell environment, we source the file we just created:

source ~/.restic-env

You can check to make sure this worked by printing out one of the variables:

echo $RESTIC_REPOSITORY

Your repository URL should print out. Now we can initialize our repository with the Restic command:

restic init

Output
created restic backend 57f73c1afc at s3:nyc3.digitaloceanspaces.com/example-bucket

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
The repository is now ready to receive backup data. We’ll send that data next.

Backing Up a Directory
Now that our remote object storage repository is initialized, we can push backup data to it. In addition to encryption, Restic does diffing, and de-duplication while backing up. This means that our first backup will be a full backup of all files, and subsequent backups will only have to transmit new files and changes. Additionally, duplicate data will be detected and not written to the backend, which saves space.

Before we back up, if you’re testing things out on a bare system and need some example files to back up, create a simple text file in your home directory:

echo "sharks have no organs for producing sound" >> ~/facts.txt

This will create a facts.txt file. Now back it up, along with the rest of your home directory:

restic backup ~

Output
scan [/home/sammy]
scanned 4 directories, 14 files in 0:00
[0:04] 100.00% 2.558 MiB/s 10.230 MiB / 10.230 MiB 18 / 18 items 0 errors ETA 0:00
duration: 0:04, 2.16MiB/s
snapshot 427696a3 saved
Restic will work for a bit, showing you live status updates along the way, then output the new snapshot’s ID (the last line of the output above).

Note: If you want to back up a different directory, substitute the ~ above with the path of the directory. You may need to use sudo in front of restic backup if the target directory is not owned by your user. If you need sudo to back up, remember to use it again when restoring the snapshot, otherwise you may get some errors about not being able to properly set permissions.

Next we’ll learn how to find out more information about the snapshots stored in our repository.

Listing Snapshots
To list out the backups stored in the repository, use the snapshots subcommand:

restic snapshots

Output
ID Date Host Tags Directory
———————————————————————-
427696a3 2017-10-23 16:37:17 restic-test /home/sammy
You can see the snapshot ID we received during our first backup, a timestamp for when the snapshot was taken, the hostname, tags, and the directory that was backed up.

Our Tags column is blank, because we didn’t use any in this example. You can add tags to a snapshot by including a --tag flag followed by the tag name. You can specify multiple tags by repeating the --tag option.

Tags can be useful to filter snapshots later on when you’re setting up retention policies, or when searching manually for a particular snapshot to restore.
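
For example (the tag name here is made up), a tag can be attached when the snapshot is created and then used to filter the listing:

restic backup --tag docs /home/sammy/Documents
restic snapshots --tag docs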

The Host is included in the listing because you can send snapshots from multiple hosts to a single repository. You’ll need to copy the repository password to each machine. You can also set up multiple passwords for your repository to have more fine-grained access control. You can find out more information about managing repository passwords in the official Restic docs.

Now that we’ve got a snapshot uploaded, and know how to list out our repository contents, we’ll use our snapshot ID to test restoring a backup.

Restoring a Snapshot
We’re going to restore an entire snapshot into a temporary directory to verify that everything is working properly. Use a snapshot ID from the listing in the previous step. We’ll send the restored files to a new directory in /tmp/restore:

restic restore 427696a3 --target /tmp/restore

Output
restoring <Snapshot 427696a3 of [/home/sammy] at 2017-10-23 16:37:17.573706791 +0000 UTC by sammy@restic-test> to /tmp/restore
Change to the directory and list its contents:

cd /tmp/restore
ls

You should see the directory we backed up. In this example it would be the user sammy’s home directory. Enter the restored directory and list out the files inside:

cd sammy
ls

Output
facts.txt restic_0.7.3_linux_amd64
Our facts.txt file is there, along with the restic binary that we extracted at the beginning of the tutorial. Print facts.txt to the screen to make sure it’s what we expected:

cat facts.txt

You should see the shark fact that we put in the file previously. It worked!

Note: If you don’t want to restore all the files in a snapshot, you can use the --include and --exclude options to fine-tune your selection. Read the Restore section of the Restic documentation to find out more.
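
For instance, to restore just the facts.txt file from the most recent snapshot into a separate directory (a quick sketch using the paths from this tutorial; restic accepts latest in place of a snapshot ID):

restic restore latest --target /tmp/restore-single --include /home/sammy/facts.txt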

Now that we know backup and restore is working, let’s automate the creation of new snapshots.

Automating Backups
Restic includes a forget command to help maintain a running archive of snapshots. You can use restic forget --prune to set policies on how many backups to keep daily, hourly, weekly, and so on. Backups that don’t fit the policy will be purged from the repository.

We will use the cron system service to run a backup task every hour. First, open up your user’s crontab:

crontab -e

You may be prompted to choose a text editor. Select your favorite — or nano if you have no opinion — then press ENTER. The default crontab for your user will open up in your text editor. It may have some comments explaining the crontab syntax. At the end of the file, add the following to a new line:

crontab
. . .
42 * * * * . /home/sammy/.restic-env; /usr/local/bin/restic backup -q /home/sammy; /usr/local/bin/restic forget -q --prune --keep-hourly 24 --keep-daily 7

Let’s step through this command. The 42 * * * * defines when cron should run the task. In this case, it will run in the 42nd minute of every hour, day, month, and day of week. For more information on this syntax, read our tutorial How To Use Cron To Automate Tasks.

Next, . /home/sammy/.restic-env; is equivalent to source ~/.restic-env which we ran previously to load our keys and passwords into our shell environment. This has the same effect in our crontab: subsequent commands on this line will have access to this information.

/usr/local/bin/restic backup -q /home/sammy; is our Restic backup command. We use the full path to the restic binary, because the cron service won’t automatically look in /usr/local/bin for commands. Similarly, we spell out the home folder path explicitly with /home/sammy instead of using the ~ shortcut. It’s best to be as explicit as possible when writing a command for cron. We use the -q flag to suppress status output from Restic, since we won’t be around to read it.

Finally, /usr/local/bin/restic forget -q --prune --keep-hourly 24 --keep-daily 7 will prune old snapshots that are no longer needed based on the specified retention flags. In this example, we’re keeping 24 hourly snapshots, and 7 daily snapshots. There are also options for weekly, monthly, yearly, and tag-based policies.

When you’ve updated the command to fit your needs, save the file and exit the text editor. The crontab will be installed and activated. After a few hours run restic snapshots again to verify that new snapshots are being uploaded.

Conclusion
In this tutorial, we’ve created a configuration file for Restic with our object storage authentication details, used Restic to initialize a repository, backed up some files, and tested the backup. Finally, we automated the process with cron.

Restic has more flexibility and more features than were discussed here. To learn more about Restic, take a look at the official documentation or the main website.

Learn About Hackers

 


These days, the word “hacker” might elicit the image of someone who is tech- and computer-savvy and hacks into security systems, but in the past, a hacker’s job was something entirely different.

Hacking is the activity of identifying weaknesses in a computer system or a network and exploiting them to gain access to personal or business data. An example of computer hacking is using a password-cracking algorithm to gain access to a computer system.

Computers have become mandatory to run successful businesses. It is not enough to have isolated computer systems; they need to be networked to facilitate communication with external businesses. This exposes them to the outside world and to hacking. System hacking means using computers to commit fraudulent acts such as privacy invasion and stealing corporate or personal data. Cybercrime costs many organizations millions of dollars every year, and businesses need to protect themselves against such attacks.

Ethical Hacker (White hat): A security hacker who gains access to systems with a view to fixing the identified weaknesses. They may also perform Penetration Testing and vulnerability assessments.

Cracker (Black hat): A hacker who gains unauthorized access to computer systems for personal gain. The intent is usually to steal corporate data, violate privacy rights, transfer funds from bank accounts, etc.

Grey hat: A hacker who is in between ethical and black hat hackers. He/she breaks into computer systems without authority with a view to identifying weaknesses and revealing them to the system owner.

Script kiddies: A non-skilled person who gains access to computer systems using already made tools.

Hacktivist: A hacker who uses hacking to send social, religious, political, and other messages. This is usually done by hijacking websites and leaving a message on the hijacked website.

Phreaker: A hacker who identifies and exploits weaknesses in telephones instead of computers.

Multiple EC2 Network Interfaces on Red Hat / CentOS 7

Quartet Tech

If you’re not running Amazon Linux with its built-in network interface management tools, adding multiple ENIs on the same subnet can be a confusing experience. We sometimes use this setup to run multiple Elastic IPs on separate network interfaces so we can bind to them separately.

We recently worked through this with Amazon support and thought we should share a quick overview of how to do this on Red Hat / CentOS 7.

1. Force your default gateway to be eth0

Edit /etc/sysconfig/network and add:

GATEWAYDEV=eth0

Not doing this left the default gateway of the main routing table set to the last interface to be configured, which caused some strange behavior.

2. Configure each additional interface you’ve added

In /etc/sysconfig/network-scripts, create an ifcfg-ethX for each new interface.

Modify:

1. The DEVICE name to match the ENI.

DEVICE="eth1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"

3. Add a custom route for each additional interface

Again in /etc/sysconfig/network-scripts, create a route-ethX file for each interface.

Modify:

1. The device name
2. Increment the table number
3. The gateway to your VPC subnet’s gateway.
4. Change the source IP to the assigned internal network address of the ENI.

default via 10.0.0.1 dev eth1 table 2
10.0.0.0/24 dev eth1 src 10.0.0.10 table 2

4. Add a custom rule for each additional interface

Also in /etc/sysconfig/network-scripts, create a rule-ethX for each interface.

Modify:

1. Increment the table number to match route-ethX
2. Change the IP to the assigned internal network address of the ENI.

from 10.0.0.10/32 table 2

Restart the network service and you should be up and running. You can confirm with “ip rule”:

# ip rule
0: from all lookup local
32764: from 10.0.0.10 lookup 3
32765: from 10.0.0.11 lookup 2
32766: from all lookup main
32767: from all lookup default
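
You can also inspect each custom routing table directly (table 2 here matches the example route-eth1 file above):

ip route show table 2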

Note that Amazon suggested a custom route and rule for eth0, but we found that allowing eth0 to use the default main routing table not only worked but was more flexible.

Black Markets on the Deep Web

The Market That Will Sell You A $20,000 Bank Loan For $30

Here are a few things you can buy on the Deep Web that you may not have heard about:

  • Rare Opsec Manuals for Survivalists
  • Rare Special Ops Guides
  • 3D Printed Guns and Bullets
  • Drugs for Cancer
  • Japanese Manga
  • Books on How to Overthrow Governments

Top Darknet Markets

There are two types of black-market drug emporiums on the Deep Web: those you trust and those you don’t. Below is a list of some of the more popular black-market outlets. One or two may be gone already by the time this article is published; in five years, all of them may be gone. The point here isn’t to steer you toward any one of these in particular, or even to encourage you to buy illegal drugs (I don’t), but to remind you that none of these sites will last forever, just as Google and Microsoft won’t last forever.

Something will come along to replace the behemoths, just as surely as AltaVista, Infoseek, Napster, and DivX were replaced. It can happen quickly. My grandfather’s motto: never gamble or invest more than you can afford to lose.

Agora Marketplace

  1. Never share your card details.
  2. Don’t keep your personal data on your device from which you are surfing the dark web.
  3. Put tape on camera.
  4. Turn off your location.
  5. Don’t use your real name.
  6. Don’t trust any person on the dark web in a random chatbox.
  7. Never open any link given on the dark web.
  8. Surf only for educational purposes.
  9. Never buy anything.
  10. Don’t use your address, personal email, phone number.
  11. Don’t use the same passwords.
  12. Log out from everywhere before using the dark web.
  13. Surf the dark web only on a clean computer (a hard drive free of documents, pictures, and videos, with all social media logged out).

How To Set Up SSH Keys

Introduction


 

The Secure Shell Protocol (or SSH) is a cryptographic network protocol that allows users to securely access a remote computer over an unsecured network.

Though SSH supports password-based authentication, it is generally recommended that you use SSH keys instead. SSH keys are a more secure method of logging into an SSH server, because they are not vulnerable to common brute-force password hacking attacks.

Generating an SSH key pair creates two long strings of characters: a public and a private key. You can place the public key on any server, and then connect to the server using an SSH client that has access to the private key.

When the public and private keys match up, the SSH server grants access without the need for a password. You can increase the security of your key pair even more by protecting the private key with an optional (but highly encouraged) passphrase.
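
Once the pair exists, the usual way to put the public key on a server is ssh-copy-id (the username and address below are placeholders):

ssh-copy-id -i ~/.ssh/id_ed25519.pub sammy@your_server_ip

After that, ssh sammy@your_server_ip should log you in using the key instead of a password.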

Step 1 — Creating the Key Pair

The first step is to create a key pair on the client machine. This will likely be your local computer. Type the following command into your local command line:

ssh-keygen -t ed25519
Output
Generating public/private ed25519 key pair.

You will see a confirmation that the key generation process has begun, and you will be prompted for some information, which we will discuss in the next step.

Capacity Planning for MySQL and MariaDB – Dimensioning Storage Size

Server manufacturers and cloud providers offer different kinds of storage solutions to cater to your database needs. When buying a new server or choosing a cloud instance to run our database, we often ask ourselves – how much disk space should we allocate? As we will find out, the answer is not trivial as there are a number of aspects to consider. Disk space is something that has to be thought of upfront because shrinking and expanding disk space can be a risky operation for a disk-based database.

In this blog post, we are going to look into how to initially size your storage space and then plan for capacity to support the growth of your MySQL or MariaDB database.

How MySQL Utilizes Disk Space

MySQL stores data in files on the hard disk under a specific directory defined by the system variable “datadir”. The contents of the datadir will depend on the MySQL server version, and the loaded configuration parameters and server variables (e.g., general_log, slow_query_log, binary log).

The actual storage and retrieval of information depends on the storage engine. For the MyISAM engine, a table’s indexes are stored in the .MYI file in the data directory, along with the .MYD and .frm files for the table. For the InnoDB engine, the indexes are stored in the tablespace, along with the table; if the innodb_file_per_table option is set, the indexes will be in the table’s .ibd file along with the .frm file. For the MEMORY engine, the data is stored in memory (heap) while the structure is stored in the .frm file on disk. In the upcoming MySQL 8.0, the metadata files (.frm, .par, db.opt) are removed with the introduction of the new data dictionary schema.

It’s important to note that if you are using InnoDB shared tablespace for storing table data (innodb_file_per_table=OFF), your MySQL physical data size is expected to grow continuously even after you truncate or delete huge rows of data. The only way to reclaim the free space in this configuration is to export the current databases via mysqldump, delete them, and re-import them. Thus, it’s important to set innodb_file_per_table=ON if you are concerned about disk space, so that when you truncate a table, the space can be reclaimed. Also, with this configuration, a huge DELETE operation won’t free up the disk space unless OPTIMIZE TABLE is executed afterward.
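
For example (the database and table names here are hypothetical), after a large DELETE on a file-per-table setup you could rebuild the table to hand the freed pages back to the filesystem:

# innodb_file_per_table only affects tables created after it is set
mysql -e "SET GLOBAL innodb_file_per_table = ON;"
# OPTIMIZE TABLE rebuilds the InnoDB table (internally a recreate + analyze) and shrinks its .ibd file
mysql -e "OPTIMIZE TABLE mydb.big_table;"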

MySQL stores each database in its own directory under the “datadir” path. In addition, log files and other related MySQL files like socket and PID files, by default, will be created under datadir as well. For performance and reliability reasons, it is recommended to store MySQL log files on a separate disk or partition – especially the MySQL error log and binary logs.

Database Size Estimation

The basic way of estimating size is to find the growth ratio between two different points in time, and then multiply that with the current database size. Measuring your peak-hours database traffic for this purpose is not the best practice, and does not represent your database usage as a whole. Think about a batch operation or a stored procedure that runs at midnight, or once a week. Your database could potentially grow significantly in the morning, before possibly being shrunk by a housekeeping operation at midnight.

One possible way is to use our backups as the base element for this measurement. Physical backups like Percona XtraBackup, MariaDB Backup, and filesystem snapshots produce a more accurate representation of your database size compared to logical backups, since they contain a binary copy of the database and indexes. Logical backups like mysqldump only store SQL statements that can be executed to reproduce the original database object definitions and table data. Nevertheless, you can still come up with a good growth ratio by comparing mysqldump backups.

We can use the following formula to estimate the database size after Y years:

Estimated size (MB) = ((Bn - Bn-1) x (Dbdata + Dbindex) x 52 x Y) / Bn-1

Where,

  • Bn – Current week full backup size,
  • Bn-1 – Previous week full backup size,
  • Dbdata – Total database data size,
  • Dbindex – Total database index size,
  • 52 – Number of weeks in a year,
  • Y – Year.
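
As a quick sanity check of the formula (using the same numbers as the worked example later in this post), plain shell arithmetic gives roughly the same figure:

# Bn = 1177 MB, Bn-1 = 936 MB, Dbdata + Dbindex = 2013 MB, Y = 3 years
BN=1177; BN_PREV=936; DB_SIZE=2013; YEARS=3
echo $(( (BN - BN_PREV) * DB_SIZE * 52 * YEARS / BN_PREV ))
# prints 80855, roughly the 81 GB estimate in the capacity planning example below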

The total database size (data and indexes) in MB can be calculated by using the following statements:

mysql> SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) "DB Size in MB" FROM information_schema.tables;
+---------------+
| DB Size in MB |
+---------------+
|       2013.41 |
+---------------+

The above equation can be modified if you would like to use the monthly backups instead. Change the constant value of 52 to 12 (12 months in a year) and you are good to go.

Also, don’t forget to account for innodb_log_file_size x 2 and innodb_data_file_path, and for Galera Cluster, add the gcache.size value.
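
To see what those settings add on a running server, you can query the variables directly (a rough sketch; on Galera the gcache.size value lives inside wsrep_provider_options):

mysql -e "SELECT ROUND(@@innodb_log_file_size * @@innodb_log_files_in_group / 1024 / 1024) AS redo_log_mb, @@innodb_data_file_path AS system_tablespace;"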

Binary Logs Size Estimation

Binary logs are generated by the MySQL master for replication and point-in-time recovery purposes. They are a set of log files that contain information about data modifications made on the MySQL server. The size of the binary logs depends on the number of write operations and the binary log format – STATEMENT, ROW or MIXED. Statement-based binary logs are usually much smaller than row-based binary logs, because they consist only of the write statements, while row-based logs contain the modified row data.

The best way to estimate the maximum disk usage of binary logs is to measure the binary log size for a day and multiply it with the expire_logs_days value (default is 0 – no automatic removal). It’s important to set expire_logs_days so you can estimate the size correctly. By default, each binary log is capped around 1GB before MySQL rotates the binary log file. We can use a MySQL event to simply flush the binary log for the purpose of this estimation.

Firstly, make sure event_scheduler variable is enabled:

mysql> SET GLOBAL event_scheduler = ON;

Then, as a privileged user (with EVENT and RELOAD privileges), create the following event:

mysql> USE mysql;
mysql> CREATE EVENT flush_binlog
ON SCHEDULE EVERY 1 HOUR STARTS CURRENT_TIMESTAMP ENDS CURRENT_TIMESTAMP + INTERVAL 2 HOUR
COMMENT 'Flush binlogs per hour for the next 2 hours'
DO FLUSH BINARY LOGS;

For a write-intensive workload, you probably need to shorten the interval to 30 minutes or 10 minutes before the binary log reaches its 1GB maximum size, and then scale the measured size up to an hourly rate. Then verify the status of the event by using the following statement and look at the LAST_EXECUTED column:

mysql> SELECT * FROM information_schema.events WHERE event_name='flush_binlog'\G
       ...
       LAST_EXECUTED: 2018-04-05 13:44:25
       ...

Then, take a look at the binary logs we have now:

mysql> SHOW BINARY LOGS;
+---------------+------------+
| Log_name      | File_size  |
+---------------+------------+
| binlog.000001 |        146 |
| binlog.000002 | 1073742058 |
| binlog.000003 | 1073742302 |
| binlog.000004 | 1070551371 |
| binlog.000005 | 1070254293 |
| binlog.000006 |  562350055 | <- hour #1
| binlog.000007 |  561754360 | <- hour #2
| binlog.000008 |  434015678 |
+---------------+------------+

We can then calculate the average binary log growth, which is around 562 MB per hour during peak hours. Multiply this value by 24 hours and the expire_logs_days value:

mysql> SELECT (562 * 24 * @@expire_logs_days);
+---------------------------------+
| (562 * 24 * @@expire_logs_days) |
+---------------------------------+
|                           94416 |
+---------------------------------+

We will get 94416 MB which is around ~95 GB of disk space for our binary logs. Slave’s relay logs are basically the same as the master’s binary logs, except that they are stored on the slave side. Therefore, this calculation also applies to the slave relay logs.

Spindle Disk or Solid State?

There are two types of I/O operations on MySQL files:

  • Sequential I/O-oriented files:
    • InnoDB system tablespace (ibdata)
    • MySQL log files:
      • Binary logs (binlog.xxxx)
      • REDO logs (ib_logfile*)
      • General logs
      • Slow query logs
      • Error log
  • Random I/O-oriented files:
    • InnoDB file-per-table data file (*.ibd) with innodb_file_per_table=ON (default).

Consider placing random I/O-oriented files in a high-throughput disk subsystem for best performance. This could be a flash drive – either SSDs or an NVRAM card – or high-RPM spindle disks like SAS 15K or 10K, with a hardware RAID controller and a battery-backed unit. For sequential I/O-oriented files, storing them on an HDD with a battery-backed write cache should be good enough for MySQL. Take note that performance degradation is likely if the battery is dead.

We will cover this area (estimating disk throughput and file allocation) in a separate post.

Capacity Planning and Dimensioning

Capacity planning can help us build a production database server with enough resources to survive daily operations. We must also provision for unexpected needs and account for future storage and disk throughput needs. Thus, capacity planning is important to ensure the database has enough room to breathe until the next hardware refresh cycle.

It’s best to illustrate this with an example. Consider the following scenario:

  • Next hardware cycle: 3 years
  • Current database size: 2013 MB
  • Current full backup size (week N): 1177 MB
  • Previous full backup size (week N-1): 936 MB
  • Delta size: 241MB per week
  • Delta ratio: 25.7% increment per week
  • Total weeks in 3 years: 156 weeks
  • Total database size estimation: ((1177 – 936) x 2013 x 156)/936 = 80856 MB ~ 81 GB after 3 years

If you are using binary logs, sum it up from the value we got in the previous section:

  • 81 + 95 = 176 GB of storage for database and binary logs.

Add at least 100% more room for operational and maintenance tasks (local backup, data staging, error log, operating system files, etc):

  • 176 + 176 = 352 GB of total disk space.

Based on this estimation, we can conclude that we would need at least 352 GB of disk space for our database for 3 years. You can use this value to justify your new hardware purchase. For example, if you want to buy a new dedicated server, you could opt for 6 x 128 GB SSDs in RAID 10 with a battery-backed RAID controller, which will give you around 384 GB of total disk space. Or, if you prefer the cloud, you could get 100 GB of block storage with provisioned IOPS for our 81 GB database usage and use standard persistent block storage for our 95 GB of binary logs and other operational usage.

Happy dimensioning!