Termin:Chaotic-Congress-Cinema-28C3 Nr. 27
From the Attraktor Wiki
We watch recordings of Congress talks together. You are warmly invited to drop by the club rooms at Mexikoring 21 to watch and discuss the talks with us. Drinks and snacks will be available at moderate prices. If you are not a member of the CCC, CCCHH, or Attraktor e.V., that is no problem at all: all guests are welcome. :-)
Further information at Chaotic Congress Cinema.
Print Me If You Dare
Firmware Modification Attacks and the Rise of Printer Malware
Network printers are ubiquitous fixtures within modern IT infrastructure. Residing within sensitive networks and lacking in security, these devices represent high-value targets that can theoretically be used not only to manipulate and exfiltrate sensitive information such as network credentials and sensitive documents, but also as fully functional general-purpose bot-nodes that give attackers a stealthy, persistent foothold inside the victim network for further reconnaissance, exploitation and exfiltration.
We first present several generic firmware modification attacks against HP printers. Weaknesses within the firmware update process allow an attacker to make arbitrary modifications to the NVRAM contents of the device. The attacks we present exploit a functional vulnerability common to all HP printers, and do not depend on any specific code vulnerability. These attacks cannot be prevented by any authentication mechanism on the printer, and can be delivered over the network, either directly or through a print server (active attack), or as hidden payloads within documents (reflexive attack).
In order to demonstrate these firmware modification attacks, we present a detailed description of several common HP firmware RFU (remote firmware update) formats, including the general file format, along with the compression and checksum algorithms used. Furthermore, we will release a tool (HPacker), which can unpack existing RFUs and create/pack arbitrary RFUs. This information was obtained by analysis of publicly available RFUs as well as reverse engineering the SPI BootRom contents of several printers.
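The actual HP RFU layouts, compression, and checksum algorithms are the subject of the talk and are not reproduced here. As a toy illustration of what an unpack/pack tool like HPacker does at its core, the following sketch uses an invented container format (hypothetical magic value, big-endian length field, and a simple additive checksum) — none of these details are HP's:

```python
import struct

MAGIC = b"RFU0"  # hypothetical magic value, not HP's actual format

def pack(payload: bytes) -> bytes:
    """Wrap a payload in a toy RFU-like container:
    magic | length (u32 BE) | checksum (u32 BE) | payload."""
    checksum = sum(payload) & 0xFFFFFFFF  # simple additive checksum
    return MAGIC + struct.pack(">II", len(payload), checksum) + payload

def unpack(blob: bytes) -> bytes:
    """Validate the header and checksum, return the payload."""
    if blob[:4] != MAGIC:
        raise ValueError("bad magic")
    length, checksum = struct.unpack(">II", blob[4:12])
    payload = blob[12:12 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    if sum(payload) & 0xFFFFFFFF != checksum:
        raise ValueError("checksum mismatch")
    return payload
```

The point of such a round-trip tool is that once the checksum is reproducible, arbitrary modifications to the payload can be re-wrapped into an update the device will accept as valid.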
Next, we describe the design and operation of a sophisticated piece of malware for HP (P2050) printers. Essentially a VxWorks rootkit, this malware is equipped with a port scanner, a covert reverse-IP proxy, a print-job snooper that can monitor, intercept, manipulate and exfiltrate incoming print-jobs, a live code update mechanism, and more (see presentation outline below). Lastly, we will demonstrate a self-propagation mechanism, turning this malware into a full-blown printer worm.
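Of the listed capabilities, the port scanner is the simplest to illustrate. The sketch below is a minimal TCP connect-scan, not the actual VxWorks implementation (which is not public); it only shows the kind of reconnaissance primitive a compromised printer could run from inside a network:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        try:
            # A completed TCP handshake means the port is open.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```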
Using HPacker, we demonstrate the injection of our malware into arbitrary P2050 RFUs, and show how similar malware can be created for other popular HP printer types. Next, we demonstrate the delivery of this modified firmware update over the network to a fully locked-down printer.
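Conventionally, HP network printers accept raw print jobs on TCP port 9100 (the JetDirect raw-print port), and — per the abstract — a modified firmware update can be delivered over the network like any other job. A minimal delivery sketch, assuming the target accepts raw jobs on port 9100 (the PJL preamble shown is the standard Universal Exit Language sequence; whether a given device processes an update sent this way depends on its configuration):

```python
import socket

RAW_PRINT_PORT = 9100  # conventional JetDirect raw-print port
UEL = b"\x1b%-12345X"  # PJL Universal Exit Language sequence

def send_print_job(host: str, data: bytes, port: int = RAW_PRINT_PORT) -> None:
    """Send raw bytes to a printer's raw-print port.
    Data delivered this way is handled like any other print job."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(UEL + data)
```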
Lastly, we present an accurate distribution of all HP printers vulnerable to our attack, as determined by our global embedded device vulnerability scanner. Our scan is still incomplete, but extrapolating from available data, we estimate that there are at least 100,000 HP printers that can be compromised through an active attack, and several million devices that can be compromised through reflexive attacks. We will present a detailed breakdown of the geographical and organizational distribution of observable vulnerable printers in the world.
(different from the main SoC) and are currently attempting to locate code related to tracking dots. Perhaps we will have some results by December. In any case, HPacker will help the community to do further research in this direction, possibly allowing us to spoof / disable these yellow dots of burden.
Deceiving Authorship Detection
Tools to Maintain Anonymity Through Writing Style & Current Trends in Adversarial Stylometry
Stylometry is the art of detecting authorship of a document based on the linguistic style present in the text. As authorship recognition methods based on machine learning have improved, they have also presented a threat to privacy and anonymity. We have developed two open-source tools, Stylo and Anonymouth, which we will release at 28C3 and introduce in this talk. Anonymouth aids individuals in obfuscating documents to protect their identity from authorship analysis. Stylo is a machine-learning based authorship detection research tool that provides the basis for Anonymouth's decision making. We will also review the problem of stylometry and its privacy implications, present new research on detecting writing-style deception and on threats to anonymity in short-message services like Twitter, examine the implications for languages other than English, and release a large adversarial stylometry corpus for linguistic and privacy research purposes.
Stylometry is the study of authorship recognition based on linguistic style (word choice, punctuation, syntax, etc.). Adversarial stylometry examines authorship recognition in the context of privacy and anonymity through attempts to circumvent stylometry with passages intended to obfuscate or imitate identity.
This talk will introduce the open source authorship recognition and obfuscation projects Anonymouth and Stylo. Anonymouth aids individuals in obfuscating their writing style in order to maintain anonymity against multiple forms of machine learning based authorship recognition techniques. The basis for this tool is Stylo, an authorship recognition research tool that implements multiple forms of state-of-the-art stylometry methods. Anonymouth uses Stylo to attempt authorship recognition and suggest changes to a document that will obfuscate the identity of the author to the known set of authorship recognition techniques.
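Stylo's actual feature sets and classifiers are described in the talk itself. To illustrate the kind of signal such tools rely on, here is a deliberately tiny attribution sketch using only function-word frequencies compared by cosine similarity — the ten-word feature list and the nearest-profile decision rule are simplifications; real stylometry systems use hundreds of features and trained classifiers:

```python
import math
import re

# A tiny set of English function words. Real stylometry tools use far
# richer features: function words, character n-grams, punctuation, syntax.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "not"]

def feature_vector(text):
    """Relative frequency of each function word in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return [words.count(w) / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def attribute(sample, corpora):
    """Attribute `sample` to the candidate author (a name -> corpus dict)
    whose corpus has the most similar function-word profile."""
    sv = feature_vector(sample)
    return max(corpora, key=lambda a: cosine(sv, feature_vector(corpora[a])))
```

An obfuscation tool like Anonymouth works against exactly this kind of profile: shifting a document's feature vector away from its author's known profile while preserving meaning.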
We will also cover our recent work in the field of adversarial authorship recognition in the two years since our 26C3 talk, "Privacy & Stylometry: Practical Attacks Against Authorship Recognition Techniques." Our lab has new research on detecting deception in writing style that may indicate a modified document, demonstrating up to 86% accuracy in detecting the presence of deceptive writing styles. Short messages have been difficult to assign authorship to, but recent work from our lab demonstrates the threat to anonymity present in short-message services like Twitter. We have found that while difficult, it is possible to identify authors of tweets with success rates significantly higher than random chance. We also have new results that examine the ability of authorship recognition to succeed across languages and the use of translation to thwart detection.
This talk will also mark the release of an adversarial stylometry data set that is many times larger than our previous release. This data set, provided by volunteers, includes at least 6500 words per author of unmodified writing as well as sample adversarial passages intended to preserve the anonymity of the author and demographic information for each author.
The content of this talk will be relevant to those with interest in novel issues in privacy and anonymity, forensics and anti-forensics, and machine learning. All of the work presented here is from the Privacy, Security and Automation Lab at Drexel University. Founded in 2008, our lab focuses on the use of machine learning to augment privacy and security decision making.