Daemon Security
6 followers
Security solutions to simplify the business process.

About
Daemon Security's posts

[01/18/2017] Running Bro in a FreeBSD Jail

A few weeks ago, a user on the Bro IDS mailing list was looking for a way to run Bro in a FreeBSD jail. FreeBSD jails provide the foundation of operating system-level virtualization, later utilized and enhanced by Solaris zones, and those containers that everyone thinks are something new. To avoid going on a complete rant, I recommend the following write-up as an overview of FreeBSD jails:

https://www.freebsd.org/doc/handbook/jails.html

The purpose of this howto is to document the basic steps necessary to use Bro within a FreeBSD jail. For jail management, ezjail is normally the recommended way to set up jails. With a recent copy of FreeBSD (FreeBSD 11), run the following commands to install the ezjail package and set up your Bro jail (Note: vtnet0 is the network interface on my host; yours could instead be em0, re0, or similar):

# pkg install -y ezjail
# ezjail-admin install -p
# ezjail-admin create bro 'vtnet0|192.168.1.20'
# cat << 'EOF' > /usr/jails/bro/etc/rc.conf
rpcbind_enable="NO"
cron_flags="$cron_flags -J 15"
syslogd_flags="-ss"
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
sshd_enable="NO"
EOF
# echo 'nameserver 192.168.1.1' > /usr/jails/bro/etc/resolv.conf
# sysrc ezjail_enable=yes
# ezjail-admin start bro

The rc.conf settings above are the ones normally recommended for jails, but they can be adjusted as needed; for example, the sendmail settings would change if Bro is configured to send out email alerts. With the jail running, the Bro package can be installed within the jail using pkg with the following command:

# pkg -j bro install -y bro

Before Bro can be used, the jail configuration needs to be updated to allow the bro jail access to bpf, the Berkeley Packet Filter used for reading packets from a network device. By design, a jail is not allowed to listen on a network interface. Modifying the devfs rules loaded by the jail allows packets to be read from the interface used to create the jail (which matches the host interface). Run the following commands to create a special devfs ruleset that gives the jail access to the bpf device:

# cat << 'EOF' >> /etc/devfs.rules
[devfsrules_jail_bro=7]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add include $devfsrules_unhide_bpf
EOF
# sed -i '' -e 's/devfsrules_jail/7/' /usr/local/etc/ezjail/bro
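After restarting the jail, you can verify the ruleset took effect by checking that the bpf device is now visible inside the jail (a quick sanity check; paths assume the ezjail defaults used above):

```shell
# Restart the jail so the new devfs ruleset is applied
ezjail-admin restart bro
# bpf should now appear in the jail's /dev tree
ls /usr/jails/bro/dev/bpf
```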

The jail is now set up to work with Bro, and the necessary Bro configuration updates can be made. The following commands are the bare minimum to get Bro running; please see the Bro documentation for additional configuration settings (Note: replace vtnet0 with the same interface used to create the jail):

# sed -i '' -e 's/^127.0.0.1.*/127.0.0.1 bro localhost/' /usr/jails/bro/etc/hosts
# sed -i '' -e 's/^interface=.*/interface=vtnet0/' /usr/jails/bro/usr/local/etc/node.cfg
# sed -i '' -e 's/^host=.*/host=192.168.1.20/' /usr/jails/bro/usr/local/etc/node.cfg

After restarting the jail, you can start Bro using the 'broctl' script:

# ezjail-admin restart bro
# jexec bro /usr/local/bin/broctl deploy

Bro will now be running within the jail, and from the host you can analyze the logs in /usr/jails/bro/usr/local/spool/bro.

There are additional configurations that can be made to this Bro jail, including running Bro as a non-root user. With that configuration, if a vulnerability within Bro were exploited, the attacker would be confined to the jail's file tree. One other caveat: the jail is set up to start when the FreeBSD host starts, but to make Bro start automatically as well, you can add the following to rc.local within the jail:

echo '/usr/local/bin/broctl deploy' >> /usr/jails/bro/etc/rc.local

Author: Michael Shirk

[11/28/2016] Recap of BroCon and SuriCon 2016

In September, I gave a talk about running Bro NSM on BSD operating systems. The talk was well received and stirred interest in the BSD operating systems and their use for network security monitoring. The slides (and at some point the video) for my talk are posted here:

https://www.bro.org/community/brocon2016.html

One of the interesting things I learned at this conference was the important role that FreeBSD plays with regard to Bro. FreeBSD and Linux are treated as the tier 1 operating systems on which Bro must work before a software update is released. After my talk was given, the updated netmap code was merged into the FreeBSD 12-CURRENT tree to add better support for packet I/O on FreeBSD, which Bro can be configured to use. For those interested in running Bro, Bro 2.5 is now available for download here:

http://www.bro.org/downloads/bro-2.5.tar.gz

The port/pkg updates should be available soon for FreeBSD. I will be working to get 2.5 into OpenBSD 6.1, as the Bro port was updated to 2.4.1 in September.
Less BSD related, but SuriCon 2016 was held in Washington, DC and was a great conference discussing the open source IDS/IPS engine Suricata. Users of Suricata on FreeBSD can compile in support for netmap to provide fast packet I/O for use with IDS. There are configurations with netmap-fwd that can be used with ipfw to provide fast IPS capabilities, which I am looking to test further. I gave a lightning talk on pulledpork, the signature update script that works with Snort and Suricata. A lot of people have forked pulledpork to suit their own needs, and there seem to be some common themes that could be incorporated into pulledpork to provide value for everyone. I really enjoyed the content, and I fully recommend these conferences to anyone interested in network security monitoring and open source security tools.

Author: Michael Shirk

[07/13/2016] vmrun.sh - The default way to use bhyve on FreeBSD.
bhyve is a type-2 hypervisor that is installed by default in FreeBSD 10+. One of its greatest features is how simple the interface is for creating and running virtual machines on FreeBSD. Since bhyve first appeared in FreeBSD 10, operating system support has expanded beyond FreeBSD and OpenBSD to include most Linux distributions and Microsoft Windows. In FreeBSD 11, bhyve will feature graphical support (UEFI-GOP), allowing for graphical UEFI installations. Several tools have been created to make managing bhyve VMs as easy as managing FreeBSD jails.
iohyve - bhyve management with ZFS support
https://github.com/pr1ntf/iohyve
vmrc - VM rc script for managing bhyve VMs
https://github.com/michaeldexter/vmrc
In addition to these management tools, the FreeBSD Handbook documents a script included with the base OS that makes it easy to use bhyve VMs. The script is called vmrun.sh and is provided at the following location:
/usr/share/examples/bhyve/vmrun.sh
Before using this script, there are some necessary steps to setup networking and storage for use with bhyve VMs. These steps are fully documented in the FreeBSD Handbook, but here are the necessary commands to load the vmm kernel module, and setup networking to allow for the tap interface to be used by bhyve VMs:
# kldload vmm
# ifconfig tap0 create
# sysctl net.link.tap.up_on_open=1
net.link.tap.up_on_open: 0 -> 1
# ifconfig bridge0 create
# ifconfig bridge0 addm re0 addm tap0
# ifconfig bridge0 up
In this example, re0 is the host interface; it is added to bridge0 along with a tap0 interface for the bhyve VM. If you would like this configuration to persist, take a look at the FreeBSD Handbook for the specific settings you will need. Once you have the tap0 interface available, you will need to create a virtual disk for the VM. The FreeBSD Handbook details how to create a disk image file (.img) for the virtual disk; for this howto, a ZFS Volume (zvol) will be used. Run the following command to create a zvol (ensure you have enough disk space to perform these operations):
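For reference, a persistent version of the networking setup above might look like the following sketch (assuming re0 as the host interface; adjust the interface names to match your system):

```shell
# Load the bhyve kernel module at boot
sysrc -f /boot/loader.conf vmm_load="YES"
# Recreate the bridge and tap interfaces at boot
sysrc cloned_interfaces="bridge0 tap0"
sysrc ifconfig_bridge0="addm re0 addm tap0 up"
# Bring tap interfaces up automatically when bhyve opens them
echo 'net.link.tap.up_on_open=1' >> /etc/sysctl.conf
```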
# zfs create -V20G -o volmode=dev zroot/freebsdvm0
(zroot in this case is the zpool I am using)
If you are using UFS as your filesystem and would like to test out ZFS, you can format a USB key with ZFS and use it to try out bhyve VMs. Once you have the zvol created, you will need an install image to use with the vmrun.sh script. For this tutorial, FreeBSD-11-BETA1 will be used as the OS for the VM. Run the following command to download the FreeBSD-11-BETA1 install iso:
# fetch ftp://ftp.freebsd.org/…/…/FreeBSD-11.0-BETA1-amd64-disc1.iso
With the installation iso, we can now run vmrun.sh with some parameters to start up the bhyve VM and install an operating system.
# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 2048M -t tap0 -d /dev/zvol/zroot/freebsdvm0 -i -I FreeBSD-11.0-BETA1-amd64-disc1.iso freebsdvm
This command will start the VM with the console output showing in the same terminal. If you are using a terminal multiplexer like tmux, you can run this command in a new window so that you still have shell access. The -c option sets the number of CPUs assigned to the VM, -m sets the amount of memory assigned to the VM, and -t sets the virtio-net tap interface to use with the VM. The -i option forces vmrun.sh to boot from an installation CDROM, while -I sets the location of the iso file.
Once the VM is started, everything from this point forward is the same as a standard FreeBSD installation. The only caveat is that after the install you will want to shut down the VM and drop the iso options from the command line before starting it again. Once the OS is installed, you can start your bhyve VM with the following command:
# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 2048M -t tap0 -d /dev/zvol/zroot/freebsdvm0 freebsdvm
This setup provides a simple method to manage multiple VMs from a terminal using the vmrun.sh script, and tmux. For additional information on features that are currently supported or planned for bhyve, or additional configuration options, refer to the following FreeBSD links: 
https://wiki.freebsd.org/bhyve
https://www.freebsd.org/…/ha…/virtualization-host-bhyve.html

[09/08/2015] Daemon Security, a Silver Sponsor of vBSDCon 2015

Daemon Security is a "Silver Sponsor" of vBSDCon 2015, the biennial BSD conference hosted by Verisign, Inc. The conference will bring together members of the BSD community in a series of round-table discussions and presentations on various BSD topics, including system administration, networking, and security. Daemon Security is proud to be sponsoring this event for a second time to help solidify the BSD operating systems as the only choice for deploying security tools and solutions. The conference is only days away, so be sure to register as soon as possible. Hope to see everyone at the Hacker Lounge to discuss Network Security with BSD, HardenedBSD and the MetaBoF.

vBSDCon 2015 at the Sheraton in Reston, VA.
http://www.verisign.com/en_US/internet-technology-news/verisign-events/vbsdcon/index.xhtml

[07/27/2015] Hunter NSM - A modular platform for deploying network sensors.

Hunter NSM is a simple install script for Snort or Bro IDS with JSON logging configured for FreeBSD. This is a simplified version of the snorby install script, as the goal is to provide a modular platform to plug into any existing security architecture. The current version has been tested on FreeBSD 10.1 and HardenedBSD.
The script is available on github:
https://github.com/shirkdog/hunter-nsm

[05/29/2015] zfscron - A great idea from the BSDNow podcast to backup your home directory.

First off, if you are interested in all of the latest news and information on the BSD operating systems, you should check out the BSD Now podcast. In the segment where Allan Jude and Kris Moore discuss viewers' questions, Allan talked about creating ZFS snapshots of your home directory every 30 minutes or so. This seemed like a great idea to capture changes that may have occurred in your user home directory since the last daily backup. zfscron.sh has been added to the zfsbackup scripts and only needs to be set up as a cronjob for a user account that has privilege to perform ZFS snapshots.
$ crontab -e 
(Then add the following to set up the cronjob for the user):
*/30 * * * * /usr/home/test/zfscron.sh
Now as you work throughout the day, snapshots will be rolled every 30 minutes, allowing you to go back if you have accidentally deleted files or directories from your user account within the past hour. 
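Recovering a deleted file from one of these snapshots is straightforward through the hidden .zfs directory (a sketch; the dataset and snapshot names here are hypothetical and will depend on your pool layout and how zfscron.sh names its snapshots):

```shell
# List the rolling snapshots for the home dataset
zfs list -t snapshot -r zroot/usr/home/test
# Copy a deleted file back out of a snapshot via the hidden .zfs directory
cp /usr/home/test/.zfs/snapshot/zfscron-2015-05-29-1430/notes.txt /usr/home/test/
```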

zfscron.sh on github:
https://github.com/shirkdog/zfsbackup/blob/master/zfscron.sh

[05/05/2015] Mumblehard - Malware that affects Linux and BSD Systems.

Several websites linked to this write-up by Marc-Etienne M.Leveille of ESET regarding the Mumblehard malware he discovered while working on a customer's issue. Though Linux malware (just like OSX malware) is nothing new, this software included a very interesting packer that actually detects BSD systems. The attack vector was by way of Joomla and WordPress exploits, and an illegal copy of DirectMailer, which installs the backdoor once the software is loaded (M.Leveille, 2015).

The malware is packed with perl code inside an ELF binary (the Linux executable format, also supported as a compatible binary format on BSD systems). Using specific system calls, the malware can determine whether the binary is executing on Linux or BSD. The following is the specific disassembled code from the M.Leveille report:

mov eax, SYS_time   // syscall 13 on BSD is fchdir
push ebx            // set to NULL or 0
push eax
int 80h             // syscall 13
// saves EAX and compares
cmp eax, 0
// jumps to a specific location for BSD systems if the value is less than 0 (negative),
// or to a specific location for Linux systems when EAX is set to the current number of seconds since the UNIX epoch

(M.Leveille, 2015, p. 6)

There is no specific data on the number of BSD systems that were compromised, aside from the compromised systems showing up in the ESET sinkholes. The key takeaway from this report is that BSD systems may be just as unpatched, misconfigured, and vulnerable as Linux systems when care is not taken to keep systems up-to-date and to promptly patch web applications when vulnerabilities are discovered. To check your BSD systems, look for binaries running from /var/tmp or /tmp. The malware also sets $0 to httpd to hide itself, and it places a cronjob that runs every 15 minutes:

*/15 * * * * /var/tmp/qCVwOWA >/dev/null 2>&1

(M.Leveille, 2015, p. 6)

Make sure you are monitoring your BSD systems and keeping your applications up-to-date.
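A quick check for the indicators mentioned above might look like the following (a sketch; the filename in the cronjob is randomly generated per infection, so match on the /var/tmp path rather than a specific name):

```shell
# Look for suspicious executables dropped in the temp directories
ls -l /tmp /var/tmp
# Check every local user's crontab for entries running out of /var/tmp
for user in $(cut -d: -f1 /etc/passwd); do
    if crontab -l -u "$user" 2>/dev/null | grep -q '/var/tmp'; then
        echo "suspicious crontab entry for: $user"
    fi
done
```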

Reference:
M.Leveille, M.E. (2015). Unboxing Linux/Mumblehard: Muttering spam from your servers. Retrieved from http://www.welivesecurity.com/wp-content/uploads/2015/04/mumblehard.pdf

[04/29/2015] jail.conf hack when upgrading from FreeBSD 9.x to 10.

If you are still using FreeBSD 9.x, you will want to migrate your jails to the new jail.conf format when you upgrade to FreeBSD 10. The new jail.conf format has been around since FreeBSD 9.1:

jail.conf manpage https://www.freebsd.org/cgi/man.cgi?query=jail.conf&apropos=0&sektion=5&manpath=FreeBSD+9.1-RELEASE&arch=default&format=html

In an effort to assist with migrating to the new jail.conf format, the jail service script creates a template file based on the configuration of the jails within your rc.conf file. In the following example, a jail called "testjail" is configured in rc.conf and then started on a FreeBSD 10.1 system:
jail_testjail_rootdir="/usr/jails/testjail"
jail_testjail_hostname="testjail"
jail_testjail_ip="192.168.1.22"
jail_testjail_procfs_enable="NO"
jail_testjail_devfs_enable="YES"
jail_testjail_mount_enable="YES"
jail_testjail_fstab="/etc/fstab.testjail"
If you run the jail, you will receive the following output:
# service jail start testjail
Starting jails:/etc/rc.d/jail: WARNING: /var/run/jail.testjail.conf is created and used for jail testjail.
/etc/rc.d/jail: WARNING: Per-jail configuration via jail_* variables is obsolete. Please consider to migrate to /etc/jail.conf
When you use the old rc.conf variables, the jail service script will create the new format for you, in this case /var/run/jail.testjail.conf. This file can be copied to /etc/jail.conf and used to start your jail with the new format. The following is the contents of the converted jail.testjail.conf:
# Generated by rc.d/jail at 2015-04-28 13:23:43
testjail {
host.hostname = "testjail";
path = "/usr/jails/testjail";
ip4.addr += "192.168.1.22/32";
allow.raw_sockets = 0;
exec.clean;
exec.system_user = "root";
exec.jail_user = "root";
exec.start += "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_testjail_console.log";
mount.fstab = "/etc/fstab.testjail";
mount.devfs;
allow.mount;
allow.set_hostname = 0;
allow.sysvipc = 0;
}
The generated jail.conf files can be consolidated into a single /etc/jail.conf file as documented by Dan Rue (2014):
cat /var/run/jail*.conf >> /etc/jail.conf
If you do not want to run the jail, you can use the "config" option with the service script and it will create the jail.conf file based on the content of your rc.conf file:
# service jail config 
testjail/etc/rc.d/jail: WARNING: /var/run/jail.testjail.conf is created and used for jail testjail.
testjail: parameters are in /var/run/jail.testjail.conf.
If you are supporting a number of customers (and jails), you can simply copy all of the generated configs into a single /etc/jail.conf file. Tools like ezjail handle the updating of the jail.conf for you when creating or modifying FreeBSD jails. With the "config" option, you can avoid having to run the jail in order to generate the proper jail.conf file for your jails.
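Once /etc/jail.conf is in place, the old jail_* variables can be removed from rc.conf and the jail managed through the jail service directly (a sketch using the testjail example above):

```shell
# Enable the jail service and list the jails to start at boot
sysrc jail_enable="YES"
sysrc jail_list="testjail"
# Start the jail from the new /etc/jail.conf
service jail start testjail
```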

Reference:
Rue, D. (2014) Convert FreeBSD 10 jails from rc.conf to jail.conf. Retrieved from http://therub.org/2014/08/11/convert-freebsd-jails-from-rc.conf-to-jail.conf

[01/14/2015] ZFS-Backup now supports a non-root user.
zfsbackup.sh has been modified to use a non-root user with the necessary privileges to perform ZFS send/receive and to administer snapshots. The script was initially a proof of concept for providing an easy way to do backups; now the zfsbackup.sh script requires a non-root user to operate. Check out the updated code on github: http://github.com/shirkdog/zfsbackup

[12/29/2014] If you upgraded to FreeBSD 10.1 from 10.0 with ZFS, make sure you upgrade your zpools
Depending on the way you perform upgrades (freebsd-update or building from source), you may be interested in the ZFS features that were added in the 10.1 release of FreeBSD. The following feature flags were added with the latest stable release of FreeBSD:
spacemap_histogram 
This feature allows ZFS to maintain more information about how free space is organized within the pool.

enabled_txg 
Once this feature is enabled, ZFS records the transaction group number in which new features are enabled.

hole_birth 
This feature improves performance of incremental sends (zfs send -i) and receives for objects with many holes. The most common case of hole-filled objects is zvols.

extensible_dataset 
This feature allows more flexible use of internal ZFS data structures, and exists for other features to depend on.

embedded_data 
This feature improves the performance and compression ratio of highly-compressible blocks. Blocks whose contents can compress to 112 bytes or smaller can take advantage of this feature.

bookmarks 
This feature enables use of the zfs bookmark subcommand.

filesystem_limits 
This feature enables filesystem and snapshot limits.
You can validate whether your zpool can be upgraded by running "zpool status" and observing the following output:

status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(7) for details.
You can view zpool-features(7) at the FreeBSD site: https://www.freebsd.org/cgi/man.cgi?query=zpool-features&sektion=7&manpath=FreeBSD+10.1-stable
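Before upgrading, you can also inspect exactly which feature flags are active or disabled on a pool (a quick check; replace bootpool with your pool name):

```shell
# Show per-feature state for the pool; "disabled" features
# are the ones a zpool upgrade would enable
zpool get all bootpool | grep feature@
```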
It is important to note that any software that does not support these features may have issues once you run the upgrade. If you are certain that you will not have any issues, you can simply upgrade your zpools by running the following command (in this example, I am upgrading my bootpool):
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
(This was required before I could run zpool upgrade)
# zpool upgrade bootpool
This system supports ZFS pool feature flags.

Enabled the following features on 'bootpool':
spacemap_histogram
enabled_txg
hole_birth
extensible_dataset
embedded_data
bookmarks
filesystem_limits

Bookmarks and filesystem_limits will be useful features for managing your ZFS datasets.