Termin:Chaotic-Congress-Cinema-28C3 Nr. 25


Chaotic Congress Cinema Nr. 25


Start: 2012-06-27, 20:30

End: 2012-06-27, 22:00


We watch recordings of Congress talks. You are warmly invited to show up at the club rooms at Mexikoring 21 and to watch and discuss the talks with us. Drinks and snacks will be available at moderate prices. If you are not a member of the CCC, CCCHH or Attraktor e.V., that does not matter at all: all guests are welcome. :-)

Further information can be found at Chaotic Congress Cinema.

Eating in the Anthropocene

Transgenic Fish, Mutagenic Grapefruits and Space Potatoes

Over the last few years hackers have begun to take a larger interest in food, gastronomy and agriculture. For many in the community the ability to create DIY molecular gastronomy hardware and recipes is an obvious entry point. This talk extends some of these early investigations beyond the kitchen and the chemical properties of food by looking at specific cultivars, food technology organizations, and connections between food systems, ecosystems and planetary change.

Part 1 of the talk explores some of the more bizarre and interesting biotechnologies and genomes that make up the human food system on planet earth, including Chinese Space Potatoes, Mutagenic Grapefruits and Glowing Sushi.

Part 2 of the talk presents ideas of food system redesign particularly relevant to hackers and food explorers: utopian cuisines, resilient biotechnologies and eaters as agents of selection.

In Part 3 we provide access to resources and propose interesting projects for black hat food hackers, DIY BIO foodies, and prospective food security researchers, such as mining the IAEA's database of radiation breeding, eating things that weren't meant to be eaten and defending agricultural biodiversity.

By introducing lesser-known stories from the history of food and technology, and by providing access to resources, we hope to get more hackers curious about exploring, questioning and redesigning our human food systems.

BIO: Zack Denfeld & Cathrine Kramer run the Center for Genomic Gastronomy, an independent research institute that studies the genomes and biotechnologies that make up the human food systems on the planet. They are currently in residence at Art Science Bangalore and are curating a show on the future of food at the Science Gallery in Dublin, Ireland.

Don't scan, just ask

A new approach of identifying vulnerable web applications

For years, we tried to identify vulnerable systems in company networks by getting all of a company's netblocks/IP addresses and scanning them for vulnerable services. Then, with the growing importance of web applications and, of course, search engines, a new way of identifying vulnerable systems was introduced: "Google hacking".

However, this approach of identifying and scanning a company's IP addresses, combined with some Google hacking for the (known) URLs of the company, does not take all aspects into account and has some limitations. First, we only check the systems that are obvious: the ones in the company's netblocks, the IP addresses provided by the company, and the URLs that are known or can be resolved using reverse DNS. But what about URLs and systems that aren't obvious? Systems that perhaps even the company in focus has forgotten? Second, the current techniques are purely technical. They don't take the business view into account at any point.

Therefore we developed a new technique, as well as a framework, to identify a company's web pages based on a scored keyword list. In other words: from zero to owning all of a company's existing web pages, even the pages not hosted by the company itself, with just a scored keyword list as input.

Systems that are hosted by third parties, web pages that were released just for a marketing campaign, perhaps even by a third-party marketing agency but in the name of the company we want to check? Possibly not even the company itself remembers all the web applications and domains that are running under its name. These systems and applications won't be detected using traditional techniques and thus pose a potential security risk for the company. Second, the current techniques are purely technical and don't take the business view into account at any point: we try to identify a company's systems using technical information such as version banners or the company's IP addresses. But how about the other way around: identifying applications and systems by using the company's business data (e.g. product names, company names, tax identification numbers, contact persons, …) and then testing the identified systems and applications for vulnerabilities?

That is what we did. The idea is to build up a scored keyword list for the company in focus. This list contains general keywords like the company name and product names, more detailed keywords like an address found in imprints, and very specific keywords like the company's tax number. Every keyword in the list is then rated by human intelligence, which means specific keywords get a higher score than general keywords. In the next step, a spider queries search engines like Bing, Google, etc. for these keywords and stores every website URL it finds in a database, together with its score. If a website that is already in the database turns up for another keyword, the score of that entry is simply increased. At the end, we get a list of websites that contained one or more of the keywords, along with a score for each site. Each URL is then checked for whether it contains one of the keywords (e.g. the company name); if so, the page's score is increased again. Next, for each entry, the FQDN as well as the IP is resolved and a whois query is executed. If the whois record contains the company name, the score is increased once more. Finally, the country codes are used to remove results that are not in the target country.
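The scoring loop described above could be sketched roughly as follows. This is a simplified illustration, not the speakers' framework: `search()`, `whois()` and the company name are hypothetical placeholders, and the country check is reduced to a TLD suffix test.

```python
# Sketch of the scored-keyword spider: query engines per keyword,
# accumulate scores per URL, then boost/filter by URL, whois and country.
from collections import defaultdict

def score_sites(keywords, company_name, search, whois, target_cc=".de"):
    """keywords maps keyword -> human-assigned weight; search(kw) yields
    result URLs; whois(host) returns the raw whois record as a string."""
    scores = defaultdict(int)
    for kw, weight in keywords.items():
        for url in search(kw):
            scores[url] += weight          # repeated hits raise the score
    for url in list(scores):
        host = url.split("/")[2]
        if any(kw.lower() in url.lower() for kw in keywords):
            scores[url] += 1               # keyword appears in the URL itself
        if company_name in whois(host):
            scores[url] += 2               # whois record names the company
        if not host.endswith(target_cc):
            del scores[url]                # drop out-of-country results
    return dict(scores)
```

The exact boost values and the country heuristic are arbitrary choices here; the point is that specific keywords (high weight) and corroborating signals (URL match, whois match) push company-owned sites to the top of the list.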

At the end of that process, we have a scored list of URLs and FQDNs that could be found using company-specific keywords. Since this process yields (depending on your keyword list) hundreds of thousands of unique hits, the list has to be reduced. We therefore did some research on the generated results and found a decent way to cut them down to an amount that can be checked manually by a human. The identified company web pages are then passed to a crawler that extracts external links from those pages, the idea being that correct company pages are likely to link to other company pages, and integrates them into the results list. Using this technique in practice, it is possible to identify a large number of websites hosted for one company, even by third parties.
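The external-link step can be illustrated with a minimal sketch using only the Python standard library (the fetching of the page itself is assumed to happen elsewhere; only the extraction is shown):

```python
# Extract links that point outside a confirmed company page's own host --
# these are candidates for further, possibly third-party-hosted, company pages.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def external_links(page_url, html):
    """Return absolute links whose host differs from page_url's host."""
    host = urlparse(page_url).netloc
    parser = LinkExtractor()
    parser.feed(html)
    out = set()
    for link in parser.links:
        absolute = urljoin(page_url, link)   # resolve relative links
        if urlparse(absolute).netloc not in ("", host):
            out.add(absolute)
    return out
```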

During the crawling process, not just external links are extracted: all forms and HTTP parameters, as well as certain parts of the web content, are stored too. So besides a list, we also have a "mirror" of each web page, including the forms and dynamic functions that pose an attack surface.
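Recording forms and parameters per page might look like the following sketch (again an illustrative stand-in for the speakers' crawler, not their actual code):

```python
# Collect a page's attack surface: query-string parameter names plus
# every form with its action, method and input field names.
from html.parser import HTMLParser
from urllib.parse import parse_qs, urlparse

class FormExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"action": a.get("action", ""),
                               "method": a.get("method", "get").lower(),
                               "inputs": []})
        elif tag == "input" and self.forms:
            self.forms[-1]["inputs"].append(a.get("name", ""))

def attack_surface(url, html):
    """Summarize one crawled page: URL parameters and form fields."""
    parser = FormExtractor()
    parser.feed(html)
    params = sorted(parse_qs(urlparse(url).query))
    return {"url": url, "params": params, "forms": parser.forms}
```

Records of this shape are exactly what downstream fuzzers and scanners need as input, which is what makes the fully automated pipeline described next possible.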

The collected information can then be used as input for special analysis modules. For some of our projects we integrated WAFP (Web Application Finger Printer), SQLMap and other well-known tools, as well as some self-written fuzzers and fingerprinters, into this process. This way the whole process, from identifying the web pages belonging to a certain company up to analyzing them for vulnerabilities, can be fully automated.


In our talk we will present our idea as well as our approach of identifying vulnerable web applications that belong to a certain company, based on business data. Furthermore, we will explain how our framework is structured and how it performs the searching as well as the vulnerability assessment in an automated way, so everybody who is interested will be able to implement their own version or adapt certain ideas for their own projects. Besides just telling you how it could work, we will also demo our framework, which performs all of the steps described above automatically.

This page was last modified on 11 April 2012, at 17:46. This page has been accessed 1,485 times.