Sunday, February 22, 2015

Getting started

What is the output of string INSTITUTION = "Richland \rCollege"; ?  There is also a major implementation problem in this code that you should be able to find easily.

//Project 0: Hello World
//Revision: 1.0
//Date: 08/15/2011
//Description: Chapter 2 Summary

#include <iostream>
#include <string>
#include <cmath>        //kept from the original listing; unused below
#include "header.h"

using namespace std;
using namespace richland;

int main(){

    string INSTITUTION = "Richland \rCollege";
    cout << INSTITUTION << endl;

    float PI = 3.14E0;
    cout << PI << endl;

    int value = -200;

    //celsius and fahrenheit were presumably declared in header.h;
    //they are declared here so the snippet compiles on its own
    float celsius = 0.0f;
    float fahrenheit = 0.0f;

    cin >> celsius >> value;

    fahrenheit = (float)((celsius)*(9 / 5) + 32);

    cout << fahrenheit << endl;

    float ans = (fahrenheit - 32)*(5 / 9);

    cout << ans << endl;

    char ch = 'A';
    cout << "The \"value\" is: " << ch << endl;                     //prints A
    cout << "The value is: " << ch + 1 << endl;                     //int arithmetic: prints 66
    cout << "The value is: " << static_cast<char>(ch + 1) << endl;  //prints B

    string hello = "Hello";

    hello[3] = 'p';    //strings are mutable: "Hello" becomes "Helpo"

    cout << "The \nstring \tis: " << hello << endl;

    return 0;
}

Pattern recognition and simple applications

You can use system("pause"); to call the pause utility on Windows systems and suspend program execution, but few students realize that the same system call can launch other applications from C++, giving you a useful program very quickly.

Translating with Google Translate, for example, only requires you to look at the URL when you translate something and recognize its pattern.  You should see that the URL string is formatted as /#source_language/destination_language/text_to_translate.

Thus, in this example you translate from German to English and launch Firefox to display the result.


//Required libraries
#include <iostream>   //cin, cout, endl
#include <string>     //string
#include <cstdlib>    //system

//Namespace specified to avoid using std:: with cin, cout, endl, ...
using namespace std;

int main(int argc, char* argv[]){
      //Adjacent string literals are concatenated into one by the compiler
      string program = "\"C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe\" "
                       "https://translate.google.com/#de/en/ich%20bin%20ein%20kinder";
      system(program.c_str());          //The magic happens here
      return 0;                         //return integer data type to OS to indicate errorlevel
}


Look up information about an IP address.

//Look up reverse IP information in Chrome
#include <iostream>
#include <string>
#include <cstdlib>    //system

using namespace std;

void printMe(string url){
    cout << url << endl;
    system(url.c_str());
}

int main() {
    //start "" supplies the empty window title start expects before a quoted program
    string url = "start \"\" \"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe\" "
                 "\"http://144.162.1.180.ipaddress.com/#reverseip\"";
    printMe(url);

    return 0;
}
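
The URL pattern here generalizes to any address.  A quick sketch (reverseIpUrl is my name for the helper; the ipaddress.com URL scheme is taken from the example above):

#include <iostream>
#include <string>
using namespace std;

//Build the ipaddress.com reverse-IP lookup URL for any address
string reverseIpUrl(const string& ip) {
    return "http://" + ip + ".ipaddress.com/#reverseip";
}

int main() {
    cout << reverseIpUrl("144.162.1.180") << endl;
    return 0;
}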


Map a location using Google maps and the location's GPS coordinates.

#include <iostream>
#include <cstdlib>    //system
using namespace std;

int main()
{
    //Call the Chrome browser and open a GPS coordinate in Google Maps
    system("\"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe\" http://maps.google.com/?q=32.921763,-96.729206");

    return 0;
}
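
All three examples follow the same pattern: build a command string and hand it to system().  As a sketch, that pattern can be wrapped in a small helper; openUrl is my name for it, and it relies on the Windows start command handing the URL to the default browser:

#include <cstdlib>   //system
#include <string>
using namespace std;

//Open a URL with the default browser on Windows
void openUrl(const string& url) {
    //start expects a (here empty) window title when the next token is quoted
    string cmd = "start \"\" \"" + url + "\"";
    system(cmd.c_str());
}

int main() {
    openUrl("http://maps.google.com/?q=32.921763,-96.729206");
    return 0;
}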

Experiment with other useful applications of this simple system call and go beyond what standard applications use it for.  Don't be afraid to create something new and useful, even if it looks simple.

Friday, November 21, 2014

Back to basics - Develop Forensic Analyst Mindset

This is a must-watch video and a must-play game for anyone who wants to start developing the investigative mindset that is essential in incident response and cybersecurity investigations.

You cannot just read about cybersecurity; you need to start developing skills, but you will see that even basic skills can be challenging once you start using them in real environments.

This video also shows that basic encoding can be used by actual applications to store passwords.  It also shows how Base64 works and how important log analysis is in this field.

http://youtu.be/9sGhmYlBrXU
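
As a companion to the video, here is a minimal sketch of how Base64 itself works: every 3 input bytes are regrouped into four 6-bit values, each mapped into a 64-character alphabet, and = pads the final group.  (This encoder is my illustration, not code from the video.)

#include <iostream>
#include <string>
using namespace std;

//Minimal Base64 encoder
string base64Encode(const string& in) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    string out;
    int val = 0, bits = 0;
    for (unsigned char c : in) {
        val = (val << 8) | c;    //accumulate 8 more bits
        bits += 8;
        while (bits >= 6) {      //emit every complete 6-bit group
            bits -= 6;
            out += tbl[(val >> bits) & 0x3F];
        }
    }
    if (bits > 0)                //leftover bits, padded with zeros
        out += tbl[(val << (6 - bits)) & 0x3F];
    while (out.size() % 4)       //pad the output to a multiple of 4
        out += '=';
    return out;
}

int main() {
    cout << base64Encode("password") << endl;    //prints cGFzc3dvcmQ=
    return 0;
}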


Monday, October 27, 2014

Back to basics - Convert ICS to HTML and CSV

The discrete nature of calendar entries makes it very difficult to see the overall picture or, in investigations, a pattern of events.  We need to see the events in chronological order in a single document that we can use as a report, or chart the values so that non-technical professionals can easily understand the sequence of events.

Thunderbird is one of the most useful and versatile applications when it comes to Internet communication.  In this post, I will explore its capability to convert .ics files, the only format that Google Calendar exports.

I also created a video to accompany this blog post: http://youtu.be/WbBRhP6VXbs

So, in order to follow this process, you need to download and install Thunderbird, https://www.mozilla.org/en-US/thunderbird/download.

Log in to your Google Calendar and create a new calendar.

Add new events to the new calendar and export the calendar as an .ics file.  Notice that in the exported .ics file the date and time stamps are not very friendly to read, so they might need to be converted manually to make sense to non-technical professionals.  The exported HTML and CSV files, on the other hand, display the date and time stamps in a user-friendly format that can be reported or charted for easy interpretation without any manual conversion or risk of human error.
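
For example, an exported .ics entry stores a timestamp such as DTSTART:20141027T140000Z.  The field positions are fixed by the iCalendar format (YYYYMMDD, a literal T, HHMMSS, and a trailing Z for UTC), so the manual conversion can be sketched in a few lines:

#include <iostream>
#include <string>
using namespace std;

//Convert an iCalendar UTC timestamp like 20141027T140000Z
//into the readable form 2014-10-27 14:00:00 UTC
string icsToReadable(const string& ts) {
    if (ts.size() < 15) return ts;    //not a full YYYYMMDDTHHMMSS value
    return ts.substr(0, 4) + "-" + ts.substr(4, 2) + "-" + ts.substr(6, 2) +
           " " + ts.substr(9, 2) + ":" + ts.substr(11, 2) + ":" + ts.substr(13, 2) +
           (ts.size() > 15 && ts[15] == 'Z' ? " UTC" : "");
}

int main() {
    cout << icsToReadable("20141027T140000Z") << endl;
    return 0;
}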


Import the .ics file into Thunderbird's Lightning add-on, which adds the calendar feature to Thunderbird.

Export the calendar in .ics, .html, or .csv format.


The HTML document can be used directly as a report, but the CSV format gives more flexibility to analyze the data or to create a chart that shows clear patterns of events.



Thus, digital forensics is about pattern recognition, but in some cases a pattern cannot emerge from data in its native format.  So we need to focus on each application's capability to import certain file types and to export the data into different formats that can aid our analysis and help identify the patterns that solve cases.

Back to basics - SQL and XSS

This post is accompanied by a video explaining this process and what you can do about it.

http://youtu.be/-W3efiMT8H0

Here is a sample web page to test JavaScript in a browser.  Save the following code in a text file, name it test.html, and open it in your browser to see what it does.

<HTML>
<HEAD>
              <script> window.open('http://zoltandfw.blogspot.com/','_blank')</script>
              <script> alert(document.cookie)</script>
              <script> alert("Your account has been compromised, please call (111)222-3333 to report!!!")</script>
</HEAD>
<BODY>
              Just a test for JavaScripts
</BODY>
</HTML>
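
The alert(document.cookie) payload above is the same string that shows up later in the transfernote column of the log below.  A web application can neutralize such stored scripts by HTML-escaping user input before rendering it; a minimal sketch (escapeHtml is my name for the helper):

#include <iostream>
#include <string>
using namespace std;

//Replace the characters that let user input break out of its HTML context
string escapeHtml(const string& in) {
    string out;
    for (char c : in) {
        switch (c) {
            case '&':  out += "&amp;";  break;
            case '<':  out += "&lt;";   break;
            case '>':  out += "&gt;";   break;
            case '"':  out += "&quot;"; break;
            case '\'': out += "&#39;";  break;
            default:   out += c;
        }
    }
    return out;
}

int main() {
    //The script tag is rendered as harmless text once escaped
    cout << escapeHtml("<script> alert(document.cookie)</script>") << endl;
    return 0;
}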

Sample log file entries showing details of what information might be collected in log files, whether to investigate after the fact or to monitor for real-time response.  Note the classic SQL injection in connection 123 below, where the -- sequence comments out the password check, and the <script> payloads that connection 214 stores in the transfernote column.

141027  7:39:45  122 Connect root@localhost on 
 122 Init DB badbank
 122 Query SELECT userid, accountnumber FROM badbank_accounts WHERE username='zoltan' AND password='9f1c050c2b226c2154d17a3ff9a602f6'
 122 Quit
141027  7:41:55  123 Connect root@localhost on 
 123 Init DB badbank
 123 Query SELECT userid, accountnumber FROM badbank_accounts WHERE username='zoltan' -- ' AND password='d41d8cd98f00b204e9800998ecf8427e'
 123 Quit
141027  8:00:30  124 Connect root@localhost on 
 124 Init DB badbank
 124 Quit
 125 Connect root@localhost on 
 125 Init DB badbank
 125 Quit
141027  8:42:47  126 Connect ODBC@localhost as  on 
 126 Query select @@version_comment limit 1
141027  8:42:55  126 Query show databases
141027  8:43:26  126 Query SELECT DATABASE()
 126 Init DB Access denied for user ''@'localhost' to database 'badbank'
141027  8:43:41  126 Quit

...

141027  9:04:20  130 Query select * from badbank_transactions
141027  9:05:22  213 Connect root@localhost on 
 213 Init DB badbank
 213 Query SELECT balance FROM badbank_accounts WHERE userid=61
 213 Quit
141027  9:05:37  214 Connect root@localhost on 
 214 Init DB badbank
 214 Query SELECT balance FROM badbank_accounts WHERE userid=61
 214 Query SELECT userid FROM badbank_accounts WHERE username='victim1'
 214 Query UPDATE badbank_accounts SET balance=balance-1 WHERE userid=61
 214 Query UPDATE badbank_accounts SET balance=balance+1 WHERE userid=60
 214 Query INSERT INTO badbank_transactions (userid,time,withdrawn,transactor,transfernote) VALUES (61,NOW(),1,60,'<script> alert(document.cookie)</script>')
 214 Query INSERT INTO badbank_transactions (userid,time,deposited,transactor,transfernote) VALUES (60,NOW(),1,61,'<script> alert(document.cookie)</script>')
 214 Quit
141027  9:05:41  215 Connect root@localhost on 
 215 Init DB badbank
 215 Quit
 216 Connect root@localhost on 
 216 Init DB badbank
 216 Quit

Sunday, October 26, 2014

Back to Basics - Information Assurance - Robots.txt

Note: If you like these blog posts, please click the +1 !

In some cases, you might need to, so to speak, crawl a web site to gather keywords or email addresses.  Web sites can use robots.txt files to prevent simple automated crawling of the entire website or part of it.  The robots.txt file gives instructions to web robots about what is not allowed on the web site, using the Robots Exclusion Protocol.  So, if a website contains a robots.txt like:

User-Agent: * 
Disallow: / 

This robots.txt will disallow all robots from visiting any page on the web site.  So, if a robot tried to visit the page http://www.domain.topdomain/examplepage.html, the robots.txt in the root of the website, http://www.domain.topdomain/robots.txt, would not permit the robot to access it.  Many web crawlers can simply ignore the robots.txt file, so it should not be used as a security measure to hide information.  We should likewise be able to ignore such a simple measure when we investigate or test web site security.  I have mentioned in previous posts the tool wget, which is very useful for downloading a web page, a website, or malware from the command line.  wget can also be configured to ignore the robots.txt file, but by default it respects it, so you need to specifically tell the tool to ignore its directions.

wget -e robots=off --wait 1 -m http://domain.topdomain 
FINISHED --2014-10-26 11:12:36-- 
Downloaded: 35 files, 22M in 19s (1.16 MB/s) 

Omitting the robots=off option produces the following result instead.

wget -m http://domain.topdomain 
FINISHED --2014-10-26 11:56:53-- 
Downloaded: 1 files, 5.5K in 0s (184 MB/s)

It is clear from this example that we would have missed 34 files by not being familiar with this simple file and its purpose.
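
To see what respecting the protocol involves, here is a minimal sketch of the prefix-matching logic a polite crawler applies.  It is deliberately simplified; real parsers match User-agent names case-insensitively and support more directives:

#include <iostream>
#include <sstream>
#include <string>
using namespace std;

//Return true if robots.txt disallows 'path' for 'agent'
//Simplified: '*' matches every agent and Disallow is a prefix match
bool disallowed(const string& robotsTxt, const string& agent, const string& path) {
    istringstream in(robotsTxt);
    string line;
    bool groupApplies = false;
    while (getline(in, line)) {
        if (line.rfind("User-agent:", 0) == 0) {
            string name = line.substr(11);
            name.erase(0, name.find_first_not_of(" \t"));
            groupApplies = (name == "*" || agent.find(name) != string::npos);
        } else if (groupApplies && line.rfind("Disallow:", 0) == 0) {
            string rule = line.substr(9);
            rule.erase(0, rule.find_first_not_of(" \t"));
            if (!rule.empty() && path.rfind(rule, 0) == 0)
                return true;    //path starts with a disallowed prefix
        }
    }
    return false;
}

int main() {
    string robots = "User-agent: *\nDisallow: /\n";
    cout << disallowed(robots, "Wget/1.11", "/examplepage.html") << endl;    //prints 1
    return 0;
}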

Using "User-Agent: * " is a great option to block robots of unknown name blocked unless the robots use other methods to get to the website contents. Let's try and see what will happen if we use wget without robots=off.

 
As you can see the User-Agent is set to wget/1.11( default Wget/version ), so as you can see in the list below, a robots.txt with the content list below would catch this utility and prevent it from getting the website contents.

Note: The three orange-highlighted packets are the 3-way handshake, so the request for the resource with the User-Agent setting is the first packet following the three-way handshake.  That might be a good pattern for alarm settings.

wget also has an option to change the default user-agent string to anything the user wants to use.

wget --user-agent=ZOLTAN -m http://domain.topdomain 



As you can see in the packet capture, the user-agent was overwritten as the option promised, but the website still allowed only a single file download, because User-agent: * caught the unknown string.  So, robots.txt can help protect a website to a certain extent, but the -e robots=off option did retrieve the whole website content even though the packets contained an unmodified User-Agent setting.

robots.txt can have specific contents to keep unsafe robots away from a web site or to provide basic protection from these "pests".  (The list below is not exhaustive, but it can be a good source to learn about malicious crawlers and a good starting point for further reading on each of these software tools.)

User-agent: Aqua_Products
Disallow: /

User-agent: asterias
Disallow: /

User-agent: b2w/0.1
Disallow: /

User-agent: BackDoorBot/1.0
Disallow: /

User-agent: Black Hole
Disallow: /

User-agent: BlowFish/1.0
Disallow: /

User-agent: Bookmark search tool
Disallow: /

User-agent: BotALot
Disallow: /

User-agent: BuiltBotTough
Disallow: /

User-agent: Bullseye/1.0
Disallow: /

User-agent: BunnySlippers
Disallow: /

User-agent: Cegbfeieh
Disallow: /

User-agent: CheeseBot
Disallow: /

User-agent: CherryPicker
Disallow: /

User-agent: CherryPicker /1.0
Disallow: /

User-agent: CherryPickerElite/1.0
Disallow: /

User-agent: CherryPickerSE/1.0
Disallow: /

User-agent: CopyRightCheck
Disallow: /

User-agent: cosmos
Disallow: /

User-agent: Crescent
Disallow: /

User-agent: Crescent Internet ToolPak HTTP OLE Control v.1.0
Disallow: /

User-agent: DittoSpyder
Disallow: /

User-agent: EmailCollector
Disallow: /

User-agent: EmailSiphon
Disallow: /

User-agent: EmailWolf
Disallow: /

User-agent: EroCrawler
Disallow: /

User-agent: ExtractorPro
Disallow: /

User-agent: FairAd Client
Disallow: /

User-agent: Flaming AttackBot
Disallow: /

User-agent: Foobot
Disallow: /

User-agent: Gaisbot
Disallow: /

User-agent: GetRight/4.2
Disallow: /

User-agent: grub
Disallow: /

User-agent: grub-client
Disallow: /

User-agent: Harvest/1.5
Disallow: /

User-agent: hloader
Disallow: /

User-agent: httplib
Disallow: /

User-agent: humanlinks
Disallow: /

User-agent: ia_archiver
Disallow: /

User-agent: ia_archiver/1.6
Disallow: /

User-agent: InfoNaviRobot
Disallow: /

User-agent: Iron33/1.0.2
Disallow: /

User-agent: JennyBot
Disallow: /

User-agent: Kenjin Spider
Disallow: /

User-agent: Keyword Density/0.9
Disallow: /

User-agent: larbin
Disallow: /

User-agent: LexiBot
Disallow: /

User-agent: libWeb/clsHTTP
Disallow: /

User-agent: LinkextractorPro
Disallow: /

User-agent: LinkScan/8.1a Unix
Disallow: /

User-agent: LinkWalker
Disallow: /

User-agent: LNSpiderguy
Disallow: /

User-agent: lwp-trivial
Disallow: /

User-agent: lwp-trivial/1.34
Disallow: /

User-agent: Mata Hari
Disallow: /

User-agent: Microsoft URL Control
Disallow: /

User-agent: Microsoft URL Control - 5.01.4511
Disallow: /

User-agent: Microsoft URL Control - 6.00.8169
Disallow: /

User-agent: MIIxpc
Disallow: /

User-agent: MIIxpc/4.2
Disallow: /

User-agent: Mister PiX
Disallow: /

User-agent: moget
Disallow: /

User-agent: moget/2.1
Disallow: /

User-agent: mozilla/4
Disallow: /

User-agent: Mozilla/4.0 (compatible; BullsEye; Windows 95)
Disallow: /

User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 2000)
Disallow: /

User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 95)
Disallow: /

User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 98)
Disallow: /

User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows ME)
Disallow: /

User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows NT)
Disallow: /

User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows XP)
Disallow: /

User-agent: mozilla/5
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: NetAnts
Disallow: /

User-agent: NetMechanic
Disallow: /

User-agent: NICErsPRO
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Openbot
Disallow: /

User-agent: Openfind
Disallow: /

User-agent: Openfind data gathere
Disallow: /

User-agent: Oracle Ultra Search
Disallow: /

User-agent: PerMan
Disallow: /

User-agent: ProPowerBot/2.14
Disallow: /

User-agent: ProWebWalker
Disallow: /

User-agent: psbot
Disallow: /

User-agent: Python-urllib
Disallow: /

User-agent: QueryN Metasearch
Disallow: /

User-agent: Radiation Retriever 1.1
Disallow: /

User-agent: RepoMonkey
Disallow: /

User-agent: RepoMonkey Bait & Tackle/v1.01
Disallow: /

User-agent: RMA
Disallow: /

User-agent: searchpreview
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: SpankBot
Disallow: /

User-agent: spanner
Disallow: /

User-agent: suzuran
Disallow: /

User-agent: Szukacz/1.4
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: Telesoft
Disallow: /

User-agent: The Intraformant
Disallow: /

User-agent: TheNomad
Disallow: /

User-agent: TightTwatBot
Disallow: /

User-agent: Titan
Disallow: /

User-agent: toCrawl/UrlDispatcher
Disallow: /

User-agent: True_Robot
Disallow: /

User-agent: True_Robot/1.0
Disallow: /

User-agent: turingos
Disallow: /

User-agent: URL Control
Disallow: /

User-agent: URL_Spider_Pro
Disallow: /

User-agent: URLy Warning
Disallow: /

User-agent: VCI
Disallow: /

User-agent: VCI WebViewer VCI WebViewer Win32
Disallow: /

User-agent: Web Image Collector
Disallow: /

User-agent: WebAuto
Disallow: /

User-agent: WebBandit
Disallow: /

User-agent: WebBandit/3.50
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: WebEnhancer
Disallow: /

User-agent: WebmasterWorldForumBot
Disallow: /

User-agent: WebSauger
Disallow: /

User-agent: Website Quester
Disallow: /

User-agent: Webster Pro
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebZip
Disallow: /

User-agent: WebZip/4.0
Disallow: /

User-agent: Wget
Disallow: /

User-agent: Wget/1.5.3
Disallow: /

User-agent: Wget/1.6
Disallow: /

User-agent: WWW-Collector-E
Disallow: /

User-agent: Xenu's
Disallow: /

User-agent: Xenu's Link Sleuth 1.1c
Disallow: /

User-agent: Zeus
Disallow: /

User-agent: Zeus 32297 Webster Pro V2.9 Win32
Disallow: /

User-agent: Zeus Link Scout
Disallow: /

Saturday, October 25, 2014

Back to Basics - Intellectual Property

This post is about practicing critical thinking in intellectual property cases and about tracking down old or previous websites that used copyrighted material.  We can also use this technique to locate images where only a portion of the image is used or relevant to the case.