Google Filtering Search Results

2017-07-04 17:06:31

While browsing the ProtonMail blog I stumbled upon this article. ProtonMail is a service I have been using since the beta release, and the group are as strong privacy advocates as you will find anywhere.

The article outlines Google completely removing ProtonMail from its search results for domain-specific terms like 'secure email' or 'encrypted mail'. Claiming the result was purely algorithmic, without any outside interference, is absolutely ludicrous. The article shows search results from all of the other major search engines, where ProtonMail appears in the top 15. These situations are dangerous for the future of the Internet, as (insert corporation) is effectively able to withhold access to competitor services.

tag(s): protonmail

First Impressions On FreeBSD

2017-05-23 18:53:21

I've been up to quite a bit since my last post, so I have much to cover. I'll begin with probably the most interesting of my efforts. I SWITCHED MY HOME SERVER TO FREEBSD. Now, I interact with a pretty diverse range of computer science professionals at the University, and I was met with just as diverse a range of opinions. A good friend of mine (who is an avid OpenBSD user) offered nothing but praise and "tell me more!" on IRC. Yet another dropped his eyes straight to the ground in disbelief. What's with all of the strongly opposed opinions?


I chose FreeBSD over the alternative BSD operating systems because of the rock-solid documentation provided in their handbook. As an avid Arch Linux user, I've grown accustomed to some really good documentation (even if I'm guilty of skimping on it in my own projects). Additionally, my favorite language, Mozilla's Rust, provides an unprecedented level of documentation for new users. FreeBSD obviously follows suit.

Ports And Packages

I will admit, it took a little while for me to understand the ports tree and its interaction (or lack thereof) with the packages construct. Since then, I've grown to love them both, as each provides unique functionality. As a novice BSD user I tend to favor packages for their simplicity, but I've spent my fair share of time exploring ports as well.

My general understanding (please forgive my naivety) is that working with ports allows for a much higher level of configuration. A user compiles programs from source while being provided a simple check-box driven configuration interface. The level of configuration options is incomparable to any other OS I've worked with.
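As a rough sketch of that workflow (using the www/nginx port as an assumed example, not necessarily one I built this way), installing from ports looks like this:

```shell
# Enter the port's directory in the ports tree (port path chosen for illustration).
cd /usr/ports/www/nginx

# Bring up the check-box configuration dialog to toggle compile-time options.
make config

# Compile with the chosen options, install the result, and clean the work directory.
make install clean
```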

For those less power-hungry users (me included!) BSD systems provide the package infrastructure as well. Programs are pre-compiled into binaries with a pretty standard configuration. These binaries are then installed on your machine along with their dependencies. This option is more user-friendly, and definitely more familiar to a traditional Linux user.
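For comparison, a minimal sketch of the package route (nginx again assumed as the example):

```shell
# Refresh the repository catalogue.
pkg update

# Install the pre-compiled binary along with its dependencies.
pkg install nginx

# Confirm what was installed.
pkg info nginx
```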

Current Setup

Right now my setup is not complex, which likely contributes to the ease of deployment. I'm running an emby-server instance as a home media server; this website runs off nginx with the obvious dependencies, php-fpm, mariadb, and the like; and finally the machine is a basic file store and backup target for many machines. As I become more comfortable with the BSD style I plan on expanding its uses into a home virtual assistant application I'm working on (yet to be made publicly available) codenamed Olivia.

Ending Thoughts

It really is surprising how simple everything is on FreeBSD. Even little things like the dmesg output just seem to make sense. So far I haven't run into any serious roadblocks, so my impression is quite favorable. I have a feeling it's only a matter of time before I switch my daily laptop over, and I'm hoping to provide more posts as I learn more!

tag(s): freebsd

My Experiences with Arch Linux HIDS

2017-04-17 11:36:17

I spent the weekend attempting to harden my home server a bit and was uncomfortably disappointed with the options available for Arch Linux. To begin my search I started with the Arch Security wiki page. There are many great suggestions there, many of which I've adopted. The real hope for my search, though, was to install an all-encompassing host-based intrusion detection system (HIDS). I had hopes the solution would involve packet analysis, file integrity checks, and log aggregation and monitoring. I quickly came to realize no such tool existed. Upon further research I gathered I would be able to roll multiple tools together and achieve the level of protection I desired.

File Integrity Checks

I've chosen to install AIDE to cover this requirement. This is a tool that creates a baseline database during an initial run and is then periodically executed to compare against, and update, that baseline. The goal is to detect file integrity exploits, for example malicious code added to a user's .bashrc file so that it executes on each login. AIDE is a low-level command-line utility that can be manually executed or scheduled as a cronjob (I've opted for the latter).
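A minimal sketch of that workflow, assuming AIDE's default database paths (these are set in aide.conf and may differ per system):

```shell
# Build the baseline database on a known-good system.
aide --init
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

# Later runs compare the current filesystem state against the baseline.
aide --check

# Example crontab entry: run the check nightly at 03:00.
# 0 3 * * * /usr/bin/aide --check
```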

Network Analysis

There are many well-known applications for network packet analysis. These include, but are not limited to, the two industry standards Snort and Suricata. I've had limited experience with both but opted to bypass them for this deployment for a number of reasons.

As an alternative, I'm going to continue running a simple iptables firewall with sshguard over the top. iptables, for those unfamiliar, provides a simple rulebase defining which sources are allowed to access which ports on the local system. Since my use case has only a few open ports, this should suffice. Additionally, sshguard provides detection of failed ssh login attempts. It can also be configured to drop packets from sources that are obnoxiously crawling the services (in the case of HTTP/HTTPS).
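A ruleset along those lines might look like the following sketch (the ports and chain layout are assumptions for illustration, not my exact rules):

```shell
# Default-deny inbound traffic; allow loopback and established connections.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Hand new ssh connections to sshguard's chain first, then accept survivors.
iptables -N sshguard
iptables -A INPUT -p tcp --dport 22 -j sshguard
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow the web server ports.
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
```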

Log Monitoring

This is the main area where I have been dangerously let down. It's common knowledge that sysadmins generally give up on analyzing logs once they grow large. In my case, I don't want to spend more than 5 minutes a day going over logs for my home server, and an aggregation tool could easily provide that level of simplicity. The first failure came when looking at logwatch. The failures of systemd (of which the greater Linux community agrees there are plenty) rear their ugly head here. logwatch, being quite outdated, relies on log files instead of the default journald system. The most sane way to pipe journald logging into plain files is the syslog-ng application. In my opinion, this is insane. Deliberately duplicating logs is unfounded, which is why I've opted to forgo the logwatch route. The next option is a newer project called journalwatch. Again, this is an underdeveloped solution. Finally, I've fallen back on writing something custom, logram. I'm hoping this will be a lightweight, cron-able solution, and I will update as the project progresses.
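For reference, journalctl itself can get part of the way toward a 5-minute daily review without any extra tooling; a couple of sketched examples:

```shell
# Show only warning-priority and worse messages since yesterday.
journalctl --since yesterday -p warning

# Follow a single service's log (the unit name may differ per system).
journalctl -u sshd -f
```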

tag(s): linux security

Cyber-privacy Under the Current Administration

2017-04-12 19:41:01

Anyone paying a nickel's worth of attention to the media sees the terrible things this administration is doing to Americans' online privacy. Currently there are two main fronts in the battle: namely, cryptography and net neutrality.

Cryptography


The obvious example here is the government attempting to force Apple to unlock the iPhone of the San Bernardino shooting suspects. During the trial, it was made quite apparent that this wouldn't be an isolated instance and that future requests were imminent. Though the requested Apple backdoor ultimately proved unnecessary because the crack was outsourced, it is still unsettling to think elected officials agree that device manufacturers should put a backdoor into their cryptographic solutions.

Backdoors are a terrible solution. At least the cybersecurity community can agree upon this (if little else). The reasoning is quite simple: if a manufacturer has backdoor access to all of their users' data, another entity (perhaps malicious) will eventually discover an exploit granting access to all of that data as well. It's naive to believe top-tier hackers aren't capable of this. If history has proven anything in security, it's that given time, all things can be broken.

Net Neutrality

With the passing of SJ Res 34 and new bills introduced to strip net neutrality measures put in place by the last administration, I'm growing more uncomfortable with who my information is available to. If you're unfamiliar with SJ Res 34, it basically allows ISPs to sell your information to advertising companies, something they were previously banned from doing. The proponents' argument was that hindering ISPs from monetizing customer information put them at a disadvantage to top cyber companies like Google, Facebook, and the like. Personally, I think this argument is ludicrous. Recent ideology holds that Internet access is a basic human requirement. Try to picture your life for a day without access to the Internet. It's arguably impossible, whereas alternatives are plentiful for Google's services (search, maps, mail, etc.) and Facebook's hollow relationship platform. I fear the passing of these measures is a small glimpse into what will happen to the Internet during the next 4 years. With any luck, officials voting on the legislation will heed the advice of those who invented the Internet. One example is Tim Berners-Lee, who has been outspoken in his opposition, going so far as to say VPNs aren't a good solution because using them shows we're complacent with the changes (I'm paraphrasing, of course).

Possible Counter-measures?

Much coverage on this topic focuses on VPNs, and while I think they are a good solution, they are not a perfect one. Nothing is. Whether to transfer your privacy trust from the US government, ISPs, etc. to the company through which you're now effectively proxying your traffic is a personal preference.

Closing Remarks

Obviously, I'm missing about a million and a half link opportunities in this post (I got lazy), but if you're interested a quick DuckDuckGo search will fill the void. I've never been a very politically active individual, but I might bust out a new pair of stomping boots before this is all said and done. Personally, I think legislation reducing the privacy of individuals is going to contribute to a massive increase in the adoption of privacy-enabling technologies. We've already seen this as app store downloads of the encrypted SMS app Signal soared following the election. With the ease of adoption and use of privacy tools increasing, I don't see how the government's current approach is going to solve any of their issues.

tag(s): privacy

Proddle: Rust and Networking

2017-04-06 09:15:52

If you're unfamiliar with my project, Proddle, please visit the site and take a gander. If you're still interested please contact me to get involved!

Using Rust for Proddle

I've chosen to write the application using Mozilla's Rust. It's a newer language focused on providing low-level operation combined with extremely safe variable access. It does this through its "lifetime" paradigm, where each variable has a duration for which it is active. Lifetimes are automatically identified by the compiler and analyzed for safety infractions. A deeper analysis is not in the scope of this post, but I'll forward you to the official Rust book where the language is extensively documented. It's a great starting point.

Capnproto & Tokio

Upon the original inception of Proddle I chose to use capnproto for the message format; as the next iteration of Google's protobuf (written by the same developer) it seemed like the logical choice. Everything worked great until the Rust crate for capnproto switched internally to using an all-encompassing networking crate, tokio. As of right now the tokio project is the bane of my existence. This is mostly due to an unclosed file descriptor bug (which I thought impossible with Rust). I understand that the project is in its infancy, but the documentation is terrible. I found myself scouring the source code to "understand" certain constructs.

I'm not going to dive into the internals of the project, because I've put that phase of my life behind me. Instead I'll provide a brief overview. The entire framework is built upon event loops and futures. Yes, even the client. An event loop is basically a loop listening on a channel for events. Futures are closures which allow a result to be passed into code before it's actually computed. Both constructs are fundamental for server programming (albeit most frameworks make them transparent). Tokio exposes the user to everything, needlessly adding complexity. The tokio-proto crate is provided to hide the deep internals, but in my experience it falls short as well.

I don't want my ramblings to be misinterpreted as malice. I think the tokio project provides a solid networking foundation, and that this work, and all the work of Alex Crichton, is improving the foundation of Rust, of which I am a strong advocate. It's just that the complex framework is NOT a uniform solution for all network applications. Tokio provides mechanisms for pipelined vs streamed protocols, for multiplexing traffic, among others. This functionality is not required for simple projects. At this stage, the internals are quite under-developed and noticeably under-documented. I will continue to follow the project and expect nothing but improvements.

Retreat to TCP Sockets

Admitting defeat in the realm of all things tokio, I opted to fall back to trusty TCP sockets. In the standard library Rust provides access similar to that of C/C++. The serde framework provides seamless serialization of Rust structs to many formats including json, bson, bincode, etc. I believe it's used internally in the capnproto Rust crate. I opted to use bincode as my transport format. A sample struct definition is provided below.

        extern crate serde;
        #[macro_use]
        extern crate serde_derive;

        #[derive(Deserialize, Serialize)]
        struct Message {
            foo: Option<String>,  // inner type lost in formatting; String assumed here
            bar: u64,
        }
This is a very simple struct with just two fields. Below is the code I'm using in Proddle to read/write through a tcp socket.

        extern crate bincode;

        use bincode::Infinite;
        use std::io::{Read, Write};
        use std::net::TcpStream;

        pub fn message_to_stream(message: &Message, stream: &mut TcpStream) -> Result<(), ProddleError> {
            let encoded: Vec<u8> = bincode::serialize(message, Infinite).unwrap();
            let length = encoded.len() as u32;

            // write the length prefix (little-endian u32), then the encoded message;
            // write_all is used so a short write can't silently truncate the frame
            try!(stream.write_all(&[(length as u8), ((length >> 8) as u8), ((length >> 16) as u8), ((length >> 24) as u8)]));
            try!(stream.write_all(&encoded));
            Ok(())
        }

        pub fn message_from_stream(stream: &mut TcpStream) -> Result<Message, ProddleError> {
            // read and decode the little-endian u32 length prefix
            let mut length_buffer = vec![0u8; 4];
            try!(stream.read_exact(&mut length_buffer));
            let length = ((length_buffer[0] as u32) | ((length_buffer[1] as u32) << 8) | ((length_buffer[2] as u32) << 16) | ((length_buffer[3] as u32) << 24)) as usize;

            // read exactly 'length' bytes and deserialize them into a Message
            let mut byte_buffer = vec![0u8; length];
            try!(stream.read_exact(&mut byte_buffer));
            let message = try!(bincode::deserialize(&byte_buffer));
            Ok(message)
        }
As you can see, in the message_to_stream function we serialize the Message struct using the bincode serialize method; it should be noted that this is available because we've derived the Deserialize/Serialize traits on the Message struct above. Then we write a u32 length field to the stream followed by the encoded buffer. The read function just does the opposite. I know, creating a new buffer for reading both the length and the actual bytes is bad practice. I'll fix this in the future.

Ending Thoughts

There you have it: a simple, lightweight alternative to many complex networking frameworks. With this change I was able to remove 6 crate dependencies from the project (2 capnproto, 4 tokio). I'm a strong believer that the more code you include, the more things can break.

tag(s): proddle rust tutorial

Replacing HTTP images with Bettercap

2017-04-03 07:57:12

I've recently been introduced to the MITM tool bettercap. The project is the same idea as the ettercap tool of old, with many modern improvements.


Bettercap is available as part of the BlackArch repositories (if you're not familiar with the project, I insist you take a look). The site provides a tutorial on a full BlackArch install or on enabling the repository in a base Arch install (I prefer the latter).

Alternatively, the bettercap install page presents a myriad of installation options including the Kali repos, git, and GEM. Whatever flavour of Linux (and possibly Windows?!) you run, installation is possible.
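For instance, the GEM route is a one-liner (assuming Ruby and the build dependencies are already present):

```shell
# bettercap 1.x is distributed as a Ruby gem.
gem install bettercap
```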


evilsocket provides a very simple bettercap proxy module to analyze HTTP traffic and replace each img tag URL with that of your locally running server (run by bettercap). To retrieve the file we can issue a simple wget command.
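Something along these lines should work, though the exact path is an assumption on my part; check evilsocket's bettercap-proxy-modules repository for the module's current location:

```shell
# Fetch the replace_images.rb proxy module (URL path assumed for illustration).
wget https://raw.githubusercontent.com/evilsocket/bettercap-proxy-modules/master/http/replace_images.rb
```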


After that the attack is as simple as creating a directory of images, choosing a host, and issuing the attack. I've chosen my images directory as 'images' and the host as ''. The command to initiate the attack is listed below.

        bettercap -I wlp2s0 -S ARP -X --proxy --proxy-module replace_images.rb --httpd --httpd-path images --target

Many of these flags are not necessary, however I include them for completeness.

  • -I wlp2s0: Specifies which interface to execute on.
  • -S ARP: Use ARP to spoof traffic through yourself (default).
  • -X: Print packet information to screen.
  • --proxy: Act as a proxy for HTTP traffic.
  • --proxy-module replace_images.rb: Use the replace images module to modify proxied traffic.
  • --httpd --httpd-path images: Start an HTTP daemon with 'images' as the root directory.
  • --target: Only intercept traffic from this particular host.

Try browsing to an HTTP page with images on the client device and you'll see all of the images replaced by random images in your 'images' directory.

Exploit Notes

With the increasing adoption of HTTPS this attack is losing traction. Bettercap, by default, attempts SSL stripping and no doubt supports a variety of SSL downgrading exploits. I haven't had much success with these in the past.

tag(s): bettercap tutorial

Buzzword Overload

2017-04-02 08:53:18

Artificial intelligence, darknet, Russian hackers, machine learning. We've all seen these words and phrases in countless headlines over the past few years. I fear things are only going to get worse as cyber security becomes a larger focal point for the uninformed public.

Artificial intelligence and machine learning. Fortunately, many of us, as passionate students of computer science, understand the implications of using these terms. Unfortunately, to the general public these terms manifest themselves as the possibility of a hostile machine takeover. Preying on the uninformed public to sell headlines is irresponsible journalism at best, and I hope this trend will soon die in favor of the truth: new algorithms are being discovered that improve the performance of many applications. That's it, simple enough. Plainly, computers cannot think for themselves.

Moving on to the darknet, darkweb, deepnet, etc. It's quite apparent that journalists covering these topics have no understanding that Tor is an overlay network and not completely disconnected from the Internet. Additionally, it's painted as a cesspit of nefarious individuals hell-bent on destroying all morality, not a haven for political dissidents, social outcasts, and the like. Finally, quit throwing in a background image of a guy in a zip-up hoodie. Some hackers choose to wear T-shirts, albeit black ones.

Blaming the entirety of hacks on Russian/Chinese hackers is akin to the shady journalism surrounding AI. To keep it short, many articles present no sane argument, instead settling for pandering to uninformed politicians.

Regardless of where you stand on these topics, I think we can all agree that these terms are used far too frequently. It pains me to flip through news stories every day, something I remain foolishly hopeful will change.

tag(s): artificial intelligence darknet

Clearing the Bush: Part Dos

2017-03-30 12:19:33

Here's my second attempt at writing this blog. I've decided on the rewrite after a failed attempt using the WordPress framework. My qualms with the aforementioned reach far beyond the scope of this post, so I'll save them for later. This project is available on my GitHub as blogrhino.

Overall this iteration presents many improvements.

  • Dynamically Served Web Pages
  • Theme-able
  • Ability to Query Posts

To start, the OG bushpath website was served using static web pages with a very elementary postgres backend. Blogrhino uses a mariadb backend that houses everything related to the pages. This allows blogrhino to serve much more than my personal blogging website.

Currently the only theme is 'talking-turkey'. My hope, as this project progresses, is to implement easily interchangeable themes. With that in mind, I'm building the functionality into the framework from the beginning.

I know, hold the rotten tomatoes, I'm really stretching for this third point. Making posts searchable by both keywords and tags is a fundamental element in any modern blogging framework. Anything to keep the masses happy.

As the project progresses I plan on continually providing updates in further posts. Following my first foray into front-end development, I began with a post highly critical of the discipline. Since working on two projects I've loosened my rageful stance. I've learned a ton and feel much more confident progressing towards my ultimate goal: to learn web application development as a precursor to deepening my security background.

tag(s): blogrhino html