After two years I have a new template. The old template was butt-ugly and full of pointless features, like dynamically changing the look and feel, seemingly to ensure that no matter what option was chosen, everything was always broken. Other, necessary functions, like contact information, it hid deep within the code, never to be seen by mere humans. I've watched my readership dwindle from thousands a day to a few dozen as, presumably, they escaped to more sanely-coded pastures.
I had come to accept all of this until today, when I found myself extending some custom rich snippets. Over the years, you see, I've been fighting something of a crazed Google war with a dermatologist from California, a dermatologist who by happenstance is named Joshua Wieder. For some time a détente had held, the good doctor opting for the more formal Joshua while I controlled the top results for the more casual Josh. Then a year passed in which I was focused on actual work. My domain name lapsed and was claimed by profiteers. The dermatologist invaded my top ten results.
Rich snippets are part of my counterattack: an ingenious plan to reclaim lost internet territory without having to resort to remuneration. That's when I noticed that Blogger's template had broken rich snippet functionality.
I had only been angling to add the "Person" Microdata ItemType to my Contact Information page. I added the necessary language to my existing markup (those interested in some basic instruction should check out this simple how-to). However, when I went to Google's Webmaster Tools to check whether it could read my changes, I was surprised to see a number of glaring, week-old errors.
Clicking on an error and selecting the "Test Live Data" option, I quickly came across the issue. Rich snippets require referencing an outside formatting file, similar to the way one might reference an Atom file in a sitemap. In the case of Blogger's dynamic templates, the "h-entry" format is used to provide rich snippets of new blog posts. The breakdown occurred because the template points to a dead link for the format page.
Rather than hunt down the reference in Blogger's obtuse file management system, I changed from a dynamic to a static template, which has resolved the issue for now. Google can read the structured data I've formatted for it, and users now get to read a slightly less hideous fixit guide.
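For reference, the "Person" markup in question looks something like this. This is a minimal sketch: the property values below are placeholders, not the contents of my actual contact page.

```html
<!-- schema.org Person microdata: itemscope/itemtype declare the item,
     and each itemprop attribute tags an individual property -->
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Joshua Wieder</span>
  <span itemprop="jobTitle">Systems Administrator</span>
  <a itemprop="url" href="http://example.com/">Contact</a>
</div>
```

Google's testing tool reads the itemtype URL and the itemprop attributes to assemble the rich snippet.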
Sunday, September 28, 2014
Friday, September 26, 2014
Patching Your Redhat Server for the Shellshock Vulnerability
Introduction
Alright guys, this is a biggie. Shellshock allows remote code execution and file creation on any server relying on a vulnerable version of Bash (1.14 through 4.3). If you are using Red Hat or CentOS and the default shell, your server is vulnerable.
The patching history has been sketchy as well. If you patched immediately when the bug came out with the fix for CVE-2014-6271, you are still likely vulnerable (as of right now, 9/26/2014 12:50PM EST). Run the following to apply the patch:
# yum update bash
You need the fix for CVE-2014-7169 if you are using Red Hat Enterprise Linux 5, 6, or 7. Note that 2014-7169 DOES NOT address the following products, which as of right now are still not fully patched: Red Hat Enterprise Linux 4 Extended Life Cycle Support, Red Hat Enterprise Linux 5.6 Long Life, Red Hat Enterprise Linux 5.9 Extended Update Support, Red Hat Enterprise Linux 6.2 Advanced Update Support, and Red Hat Enterprise Linux 6.4 Extended Update Support.
If you applied the CVE-2014-6271 patch and need the rest of the fix, reference RHSA-2014:1306.
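One quick way to confirm that the updated package actually landed is to check the installed bash build and compare it by hand against the fixed versions listed in the errata for your release. This is just a sketch; the rpm guard is only there so the snippet degrades gracefully on non-RPM systems:

```shell
# Print the installed bash package version; compare it against the
# fixed build listed in RHSA-2014:1306 for your RHEL/CentOS release.
if command -v rpm >/dev/null 2>&1; then
    installed=$(rpm -q bash)
else
    installed="rpm not available on this system"
fi
echo "$installed"
```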
Diagnosis / Am I Vulnerable?
Copy, paste and run the following command from your shell prompt:
env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
If the output of the above command contains a line with only the word "vulnerable" you are still vulnerable. Depending on what version you are using and what patches you have applied, the command output will be different.
A completely vulnerable system will do this:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
vulnerable
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'
bash: error importing function definition for `BASH_FUNC_x'
test
Systems patched with CVE-2014-6271 but not CVE-2014-7169 will do this:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
bash: error importing function definition for `BASH_FUNC_x()'
test
Systems that used the RHSA-2014-1306 patch do this:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `BASH_FUNC_x'
test
Next we have to test the file creation aspect of the Shellshock vulnerability. Execute the following command, in its entirety, from your shell:
cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
This is what a non-vulnerable system will provide:
$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
date
cat: /tmp/echo: No such file or directory
If you're extra paranoid like me, you may just want to double-check that there is no file "echo" in your /tmp directory. A system that is still vulnerable will respond to the command by providing the date and time according to your system clock and creating the file. The initial output will look similar to this:
$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
bash: x: line 1: syntax error near unexpected token `='
bash: x: line 1: `'
bash: error importing function definition for `x'
Fri Sep 26 11:49:58 GMT 2014
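For that double-check of /tmp, something along these lines works; the wording of the two messages is mine, not part of any official test:

```shell
# The file-creation exploit literally writes a file named "echo" into
# /tmp; after running the test above, make sure it did not appear.
if [ -e /tmp/echo ]; then
    status="WARNING: /tmp/echo exists; bash may still be vulnerable"
else
    status="OK: no /tmp/echo found"
fi
echo "$status"
```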
Please guys, check your servers and get this wrapped up as quickly as possible. I can't stress enough how dangerous this vulnerability is, particularly given how many administrators allow direct access to their servers through one port or another. Feel free to contact me if you have any additional questions or concerns. I am happy to help.
Labels:
bash,
CentOS,
exploit,
patching,
redhat,
shell,
shellshock,
updates,
vulnerability
Tuesday, September 9, 2014
RedIRIS Compromised?
For those not familiar with Spanish ISPs, RedIRIS is Spain's National Research and Education Network. They are part of the Consorci de Serveis Universitaris de Catalunya and the Forum of Incident Response and Security Teams. Essentially, it's an organization devoted to university networking projects and advanced R&D. They get their own nice big netblock to mess around with (in this case 193.144.0.0/14). Similar projects in the US would be CalREN, Internet2 and LambdaRail.
I'm seeing what looks like malicious scanning from the RedIRIS netblock, like this:
**** - - [08/Sep/2014:18:54:34 -0400] "GET /muieblackcat HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:34 -0400] "GET //phpMyAdmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:34 -0400] "GET //phpmyadmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:35 -0400] "GET //myadmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:35 -0400] "GET //mysqladmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:35 -0400] "GET //pma/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:36 -0400] "GET //mysql/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:36 -0400] "GET //scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:37 -0400] "GET //MyAdmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:37 -0400] "GET //typo3/phpmyadmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:37 -0400] "GET //phpadmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:38 -0400] "GET //pma/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:38 -0400] "GET //web/phpMyAdmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:39 -0400] "GET //xampp/phpmyadmin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:39 -0400] "GET //web/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:39 -0400] "GET //php-my-admin/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
**** - - [08/Sep/2014:18:54:40 -0400] "GET //websql/scripts/setup.php HTTP/1.1" 404 15 "-" "-"
The traffic lacks the usual signs of IP spoofing. Spoofed scanning I come across tends to show multiple IPs attempting the same types of connections within a fairly short period of time; here, the access attempts are unique. If these connections were spoofed, they would be pointless: not enough connections to add any server load for a DoS attempt, and no way to route a reply. Days ago, against another target host, I saw an identical block of requests from a server in a California data center. This all points to a botnet looking to expand itself.
I've tried to contact RedIRIS, but they are a big organization and my Spanish is barely comprehensible. If anyone affiliated with RedIRIS, FIRST or CSUC reads this, please email me or leave a comment below with your email. I would be happy to provide additional data that would help to identify and remove the source of malicious traffic.
As many readers already know, the files this scan looks for should never be accessible to public traffic. Best practice is to remove installation files once an application's install is complete. Keeping configuration files in uniquely named directories doesn't hurt, either.
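If you want to check your own logs for the same pattern, a pipeline like the one below counts probe attempts per source IP. The sample lines fed in via printf are made up for illustration; in practice you would read your real access log instead (the path varies by distribution, e.g. /var/log/httpd/access_log on Red Hat):

```shell
# Count scanner hits per source IP; the top of the list is the host
# probing you hardest. Sample lines stand in for a real access log.
hits=$(printf '%s\n' \
  '1.2.3.4 - - [08/Sep/2014] "GET //pma/scripts/setup.php HTTP/1.1" 404 15' \
  '1.2.3.4 - - [08/Sep/2014] "GET /muieblackcat HTTP/1.1" 404 15' \
  '5.6.7.8 - - [08/Sep/2014] "GET /index.html HTTP/1.1" 200 512' \
  | grep -E 'setup\.php|muieblackcat' \
  | awk '{print $1}' | sort | uniq -c | sort -rn)
echo "$hits"
```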
Labels:
CSUC,
FIRST,
ip spoofing,
malicious traffic,
NREN,
RedIRIS,
scanning,
spain
Wednesday, September 3, 2014
Schadenfreude + Irony = Blog Post
An Example of Bad Referrer Traffic and How to Block it Using ModRewrite and IPTables
Getting these on one of my web servers on an almost daily basis:
114.232.243.86 - - [01/Sep/2014:09:51:34 -0400] "GET http://hotel.qunar.com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1" 404 15 "http://hotel.qunar.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36"
The traffic comes from all sorts of different IPs that are owned by China Telecom. 114.232.243.86, 114.231.42.219, 222.209.137.232, 222.209.152.192, 118.113.227.95.
The host I am seeing this on does not need to speak to anyone or anything in China, so I used IPTables to filter the entire netblocks from which I see hits. Here is an example of a filtering rule, along with a little note for myself. Notice that this rule assumes two nonstandard chains, BLACKLIST and LOGDROP, that I use to organize my ruleset.
-A BLACKLIST -s 114.224.0.0/12 -m comment --comment "Chinanet Hotel Qunar Referrer" -j LOGDROP
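The BLACKLIST and LOGDROP chains referenced in that rule are not built in; they have to be created first. A minimal sketch of the setup, with a log prefix and rate limit of my own choosing (your ruleset may differ):

```
# Create the custom chains
iptables -N BLACKLIST
iptables -N LOGDROP
# LOGDROP: log the packet (rate-limited), then drop it
iptables -A LOGDROP -m limit --limit 5/min -j LOG --log-prefix "LOGDROP: "
iptables -A LOGDROP -j DROP
# Send inbound traffic through BLACKLIST before the normal INPUT rules
iptables -I INPUT 1 -j BLACKLIST
```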
Because I'm not sure which IP the next connection will come from, but all of the connections rely on the hostname hotel.qunar.com, I also set up a RewriteMap in Apache for that hostname. RewriteMap directives have to be added at the virtualhost or server level - they can't be placed within an .htaccess file. So I added the following to an Apache Conf include file (again to keep things organized):
##
## Bad Referrer Deflection via RewriteMap
##
RewriteEngine on
RewriteMap deflector txt:/$PATHTOFILE/deflector.map
RewriteCond %{HTTP_REFERER} !=""
RewriteCond ${deflector:%{HTTP_REFERER}} =-
RewriteRule ^ %{HTTP_REFERER} [R,L]
RewriteCond %{HTTP_REFERER} !=""
RewriteCond ${deflector:%{HTTP_REFERER}|NOT-FOUND} !=NOT-FOUND
RewriteRule ^.* ${deflector:%{HTTP_REFERER}} [R,L]
While my deflector.map file looks like this (make sure that the file has permissions necessary for Apache to read it):
##
## deflector.map
##
http://hotel.qunar.com -
The "-" after the bad hostname is the map value that tells Apache where to send the connection; "-" means the request is redirected back to the referrer itself. However, you can send the traffic to a page informing the scanner that you know what they are up to, if you are feeling confrontational (and don't mind the additional load).
Your deflector.map doesn't have to be a text file. Using a dbm hash file is both possible and considerably faster. Read more about the RewriteMap directive at the Apache project website.
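As a sketch of that dbm route, assuming the httxt2dbm utility that ships with Apache httpd (the exact output filename can vary with the dbm library in use, so check what the tool actually produces):

```
## Convert the text map to a dbm hash:
##   httxt2dbm -i deflector.map -o deflector_map
## Then swap the RewriteMap prefix from txt: to dbm:
RewriteMap deflector dbm:/$PATHTOFILE/deflector_map
```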
Monday, September 1, 2014
Thank You
The website is rapidly approaching a quarter million hits(!). I haven't really done much to plug the site besides announcing new posts on Twitter and Google Plus, where combined I have about 30 followers. Some time ago I used the free Bing and AdWords credits they give you for signing up; it never drove any real traffic to the site, and I never renewed after the trial. The only explanation I can think of is that people are reaching the site while looking for a way to fix a vexing issue, which is exactly what I had hoped for.
Well, in all fairness, 14% (at most) seem to be looking for free Windows product keys (and leaving disappointed - sorry folks). All in all, that wave was about 33,000 views, which leaves over 200,000. Our average post gets about 1,000 views, with quite a few getting around 5,000 to 10,000.
India is the second largest source of traffic, behind the US and before the UK.
Perhaps most surprising, my post about QBasic Gorillas is the most popular (behind the bit about support keys that everyone thinks will give them free Windows, of course). Will wonders never cease?
Thank you to everyone who continues to find value in the site. I will do my best to keep providing information to help you when you need it.
-Josh