Wednesday, November 3, 2010

fragile software systems & risks in homogeneity

well there are some things which reportedly do not belong on blogs... grrr... so here's some more of the drivel you've come to expect ;)

this here is one of those 'not sure if i should laugh or cry' links:

the most advanced fighter in the world ... was able to rack up an impressive 241-to-2 kill ratio [during war-games] ... [but] was felled by the International Date Line (IDL) ...

When the group of Raptors crossed over the IDL, multiple computer systems crashed on the planes. Everything from fuel subsystems, to navigation and partial communications were completely taken offline. Numerous attempts were made to "reboot" the systems to no avail ... the Raptors had their refueling tankers as guide dogs to "carry" them back to safety ... They had no communications or navigation


summarized pseudo misquote: "aircraft which cost $125+ million USD apiece were [disabled by] a few lines of computer code"

the F22 IDL story made me wonder if the EA-18G that 'killed' an F-22 was able to do so particularly because of electronic warfare capabilities...? no idea, but i'd love to ask that Grizzly driver ;)

there might be a couple of take-aways here...

#1 - increasing reliance on fragile critical computerized systems that lack redundant backups will present significant new risks. think about the F-22 design philosophy versus my favorite airborne weapons platform: the hawg!

the A-10 has "triple redundancy in its flight systems, with mechanical systems to back up double-redundant hydraulic systems ... [and] is designed to fly with one engine, one tail, one elevator and half a wing torn off." you don't have to google far on the A-10 to find a variety of stories about how well it performs under the stress of combat operations. reportedly, "the 165 Warthogs that flew in Desert Storm [had a] 95.7% mission capable rate ... the highest sortie rate of any USAF aircraft ... [while] roughly half of the total A-10 force supporting Desert Storm suffered some type of battle damage ... [just] five A-10s were lost in action".

yes, physical survivability is very different from electronic system fragility, but there may be parallels. if the F-22 is tough to target with traditional weapon systems, maybe a better approach is a big ass radio antenna and a decent fuzzer ;)

#2 - highly homogeneous systems deployed into production can fail spectacularly. relatively survivable critical systems like the DNS root servers are deployed on varying hardware and software to avoid this issue. once the JSF becomes the mainstay fighter of western nations, a similar 'vulnerability' could theoretically disable entire air forces. don't worry, all JSF code is written in C++ (wikipedia) so there won't be *any* software induced failure points... lulz...


ps: speaking of crappy code and fragile software, i recently discovered that the back-end of sslvis is b0rked. i'll be getting it fixed up, getting features added to the back-end, and moving it out of beta as soon as i can... sorry!!

Thursday, October 7, 2010

recent NSA history via Nova

some crazy tidbits in there... notably lacking in any conspiracy-foo... pbs ftw! :D

haha, so i can't embed hulu here? whatev....

http://www.hulu.com/watch/182504/nova-the-spy-factory

Wednesday, August 4, 2010

strategic subversion?

<ramble>

my boy @zenfosec was schoolin me on kung-foo flix the other day, and we got to talking about how blu-ray rips and dvd capacity seem to line up, and then started wondering how long until we see previously unknown brands of cheap electronic media players at superstores which can play the format in question... (now?)

anywho, one might observe that 'traditional'/mainstream/'western' manufacturers don't produce these devices but capitalist markets fill consumer demand in this area.

one might also observe that a significant number of rip nfo files appear to come out of china.

that could lead into speculation about whether a socialist culture that reportedly 'thinks' in terms of centuries and longer might make a conscious effort to undermine capitalism by using capitalism against itself...?

this might be in line w/ the idea of mass producing offensive infosec 'armies'. btw, i am very disappointed that the talk about this field outta taiwan got pulled from bh/dc. if anyone wants to share the slides, hit me w/ a gpg key ;) (also, i got to chat w/ some super smart folk in vegas n learn some nifty stuff, props to everyone involved :)

anyway... insofar as unintended consequences and blowback, it might be fair to ask if this would be a risky strategy. when a traditional soldier is discharged and leaves his barracks he gives back his primary weapons. if you imagine forward a couple decades to legions of retired technically capable trained electronic 'subversives'(?), what will the world look like to political powers seeking to control information? lots of shades of grey in there prolly ;)

</ramble>

greetz n 敬 to peeps w/ comments n the operanos chillin in the back too ;)

Wednesday, June 23, 2010

privacy trends

[premise]
the ability to collect and process massive amounts of information allows for a world where anonymity is minimized


[tracking]
i thought i remembered reading that investigators used public surveillance camera data to back-trace the craigslist killer philip markoff, but a quick glance or three at google didn't confirm that at all...

either way, the same idea played out in the whole dubai / mossad deal. cameras are all over, and if you have access to a lot of them you can start traveling back in time in a sense, back-tracing an event in your observable realm...

schneier has pointed out at length that to-date facial recognition false-positive rates render such systems ineffective. but anecdotal evidence suggests a different story when human analysts can quickly review large sets of public video data.

dubai wants more cameras, and technology drivers are expressing interest in mass video collection for further automated and auto-augmented manual analysis.

uav technology is already migrating to law enforcement applications... military developed gunshot detectors have been deployed as well. military style surveillance technology appears to be integrating into daily life relatively quickly.

automated license plate detection technology is growing, and in some places police have real-time access to computerized records which include details beyond court convictions or even incidents where a court was involved.


[physical evasion]
this brings up the whole issue of evasion. in theory tech like this (IR LEDs that blind cameras) could be expanded to cover more than faces. i hear there are higher grade cameras that filter IR, so this isn't entirely reliable, but then most cameras will be cheap. then there's also the fact that a white shiny blob of a person walking around might attract attention from the humans and robots watching the video feed. it might be effective if employed w/ some planning as to when it is activated, and might be augmented by employing physical disguise as part of the plan if you wanted to be concealed moving to and from a location.

a more nifty technique would be lens detection and targeted energy overload of cameras (possible?), but beware false positives from people's eyes ;) also, the wake of camera failures would be an alarm that something was going down and where it was happening


[secure comms]
there really are rooms where government agencies are sucking up massive amounts of data (presumably including voice data routed over digital transports) which are apparently important enough to invoke 'state secrets' to defend. it seems like major voip providers like skype are cooperating by giving states access to at least targeted conversations. and there seems to be industry enough to support the manufacture of ssl mitm devices.

as an aside, big ups to moxie for releasing the redphone app to re-give average people the ability to have a semi-anonymous phone conversation. a friend and i were in the planning stages of building a similar app, but that damn moxie clearly had more motivation, time, and ability ;)

anywho, after september 11 2001 a US lt colonel and others stood up to talk about able danger, which was a mass data-mining and information processing effort. it takes approx 16-22 years of service to attain the rank of lt colonel, so after the government says "we don't know what he's talking about" and there are claims that evidence disappeared you've kinda gotta ask "are these people crazy to fuck up their lives for 15 minutes of fame, or does the government maybe have some interest in hushing the capabilities of massive data analysis...?"

the book 'the rootkit arsenal' calls full packet capture the worst-case scenario for a root-kit operator. you dig? collecting tons of information gives you significant potential detection capabilities.

anecdotal evidence indicates that anonymous voice and data connections may not be readily available as services you can purchase.


[wikileaks / nation-states]
so we get to a place where the founder of a site dedicated to exposing information inconvenient to massive entities is apparently laying low from a nation-state...? according to da twittaz one of the last people he was seen with was valerie plame... at first i was thinking she was sibel edmonds, but all these covert secret conspiracy women just had me all mixed up ;)


[identity]
so there's always a weak link somewhere... and it seems to me that in a world where automated detection and tracking is growing, the weak link might be identity. if you can build ghost identities you can travel and exist in anonymity so long as you don't make anyone notice you, much as humans have been doing far into our past... but if you only have your natural identity then many of your words, motions, and actions may be available for later analysis to an interested party.

information may want to be free, but it seems some people want to hoard it...

Thursday, May 27, 2010

novel(?) anti-xss technique caught my eye

saw this a few weeks ago, and it stuck out b/c i'd never seen or heard of anything like it... i ran it past a few peeps i respect and they'd never seen it, so i figured i'd share :D

it's very common to find XSS in search functions on web apps where the text a user enters into the form is reflected onto the page after the form is submitted. so you hit an app and search for "foo" and on the search results page you get back the search form is populated with "foo" which you just searched for. well if someone constructs a malicious link like:

http://someapp.somedomain.edu/search.htm?q=foo"><script>evil code here...

you end up w/ an xss attack assuming the app is poorly written...

typically during web app assessments you've gotta go smack the developer and tell them to validate their inputs and encode their outputs, but this time it took me a minute to figure out what was going on... sooooo here's the resulting html src of a little PoC i put together and tested w/ google app engine and ff3.x:


<html>
<head>
<title>xsstest</title></head>
<body>
<center>
<form name='testform' action='javascript:alert(testText.value);' id='testform'>
<input name="testText" id="testText" tabindex="1" onkeyup="javascript:alert(this.value)" />
<input type="submit" name="btnTest" id="btnTest" value="testfoo" onclick="" />
</form>
</center>
</body>
</html>


so wtf is that? ok, this was based on a search form on an ajax-ish web app. there was more to the real app, but this includes all the relevant bits. when i searched on the app, i saw my inputs were reflecting in my browser so i went to check if they were html encoding them server side... but the value i was inputting in the search field never showed up in the page src... ermm, wot?

well, here's what i think is happening:


<input name="testText" id="testText" tabindex="1" onkeyup="javascript:alert(this.value)" />


note that the "value=" attribute is missing above. that makes the value null when the server first serves the page. when you use the form, the app acts on your inputs using stuff like onkeyup/onkeydown, and when the user data needs to be read, it's done using the object oriented "this." convention which allows an object to refer to itself.

when you submit the form the app processes your inputs, but the actual value you enter is never written to the page by the server. it exists only in memory on your client machine and never lands in the html src. when the page refreshes your client browser renders the input element and snags the 'value=' value from memory, and thus seems to avoid those pesky output encoding issues...?
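for contrast, the usual 'go smack the developer' fix looks something like this server-side (just a sketch in python, not anything from the real app):

```python
import html

def render_search_form(user_query):
    """hypothetical handler: html-encode the reflected value before
    writing it into the page -- the classic output encoding fix"""
    safe = html.escape(user_query, quote=True)
    return '<input name="testText" id="testText" value="%s" />' % safe

# a typical breakout payload comes back as inert text instead of markup
rendered = render_search_form('foo"><script>alert(1)</script>')
```

no breakout possible: the quotes and angle brackets land in the page as `&quot;`, `&gt;`, and `&lt;`, so the payload never leaves the attribute.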

anywho, it looks legit to me, but it's not a game changer or anything. kinda limited in its application, and doesn't do anything for sql injection, csrf, etc.

but still kinda nifty mb ;)

Friday, May 7, 2010

rwnin@firefox-extension: ./sslvis -h -vvv

[sslvis: firefox extension]
https://addons.mozilla.org/en-US/firefox/addon/158232/


[background]
iirc, the basis for a lot of security assumptions on the modern intert00bz come down to everyone trusting that the CAs will keep their promise to not issue bullcerts (technical term: bullshit certificates).

but it looks like they are issuing them to governments and intelligence agencies:

http://www.wired.com/threatlevel/2010/03/packet-forensics/
http://arstechnica.com/security/news/2010/03/govts-certificate-authorities-conspire-to-spy-on-ssl-users.ars


[not to be a bitch]
i mean, in theory all important comms should go over crypto that you manage and trust... and this can be used for 'good'. but that doesn't change the fact that most people use these communication channels for a variety of reasons with an expectation of near absolute privacy.


[so the theory goes]
they are hunting someone using 'secure' public inet services and wanna do a targeted interception or run some pattern matching on a network near afghanistan. so they do a network level tap on a choke point in the networks serving the region.


[and?]
the CAs gave em a valid cert, so they plug in their device and they're doin cleartext intercepts on everything going through that region. the cert is valid, it's made out to google or sekritbadguylayer.com or whoever.


[massive qualifier]
so do you think that cert the CA gave some snooping party is an exact match of the legit cert running in production?

i'm gonna guess no for the following reasons:

1) the CA would be completely destroying the trust model (bad for business) if they couldn't revoke the certs
2) maybe they simply can't reproduce a cert they issued because data wasn't kept or conditions can't be reproduced (?)


[not my idea]
hashes are just dang hard for people to pay attention to, cause they're huge random strings. but a few years back at bh/dc someone (kaminsky? ranum? sober? no...) was talking about how you can visually represent that same hash value as a series of colors, and all of a sudden it's really easy for humans to notice when a hash changes.


[soooo]
boiling a hash into a word is what sslvis does, and it's a very similar concept. if you hit gmail one day and your word is 'paradox' when it always used to be 'apple' you can easily notice that those words have changed. normally you wouldn't be alerted to the change because there are no warnings or indicators for changes to another valid cert.
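the general idea, sketched in python (toy wordlist, and not sslvis's actual algorithm):

```python
import hashlib

# toy wordlist -- the real sslvis list is obviously much bigger
WORDLIST = ['apple', 'paradox', 'canvas', 'anchor', 'breeze', 'cobalt',
            'dagger', 'ember', 'fjord', 'gusto', 'harbor', 'iris']

def hashword(cert_bytes):
    """boil a cert down to a word: hash the cert bytes, then use the
    digest as an index into a wordlist"""
    digest = hashlib.sha1(cert_bytes).digest()
    idx = int.from_bytes(digest, 'big') % len(WORDLIST)
    return WORDLIST[idx]

# same cert -> same word every time; a swapped cert almost certainly
# maps to a different word, which a human will actually notice
```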


[verification?]
there may be a completely legit reason the cert changed. certs expire, disks fail, load balancers exist, devices change, etc etc etc. that's why sslvis sends the host, domain, tld, and hashword value to an external app server:



the default server is hosted on google app engine and feeds the info into a tagcloud:



it includes a (crappy) search feature which lets you visualize the proportions of the certs other people are seeing in real-time. it is slated to include clouds which show the results over time (vapor tagclouds atm ;).

so if your google word is paradox, and that's what everyone else is seeing for the last hour, you're prolly ok to feel kinda sorta mb privatish... kinda...

but if your google word is paradox and there are no other results or just a couple others, there is a stark visual cue in the juxtaposed sizing in the tag cloud... this lets you know you're experiencing an anomaly in your connection, and mb you shouldn't proceed...?



in the img above, it looks like maybe a non-malicious anomaly, since canvas is the normal word for www.google.com from what i've seen... (should prolly implement a word search function)


[communist socialist conspiracy?!?!]
well it kinda democratizes and visualizes the whole CA trust issue. a sort of sunshine for crypto maybe? again, not a new idea...


[sidenote]
what about wildcard ssl certs? in theory this detects them too...?


[erm, privacy?]
yea, there is a definite loss of privacy here. but before anyone rants about it, you kinda need to understand that there are rooms in major network facilities where state actors are tapping massive networks on a massive scale. the fact that you go to ilovefarmanimals.com or someshadysite.com is already potentially known to a potentially interested party, even if the details of what you are doing are hidden in the SSL channel.

oh, well that and you can use regexp exclusions or just disable reporting. by default rfc1918 networks are excluded. a trailing asterisk lets you know that a value wasn't reported.

also, you can choose your remote reporting server in the extension options and the source is available so you can just light up one for your own network (and/or just write your own interface to capture the data, it's just a couple HTTP GET parameters).
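roughly what the exclusion + reporting logic amounts to (a python sketch; the parameter names here are made up, peek at the source for the real ones):

```python
import ipaddress
from urllib.parse import urlencode

def should_report(host_ip):
    """default exclusion: skip rfc1918 / private addresses"""
    return not ipaddress.ip_address(host_ip).is_private

def report_url(server, host, domain, tld, word):
    """build the reporting request -- the extension just sends a
    couple HTTP GET parameters like this (names hypothetical)"""
    qs = urlencode({'host': host, 'domain': domain,
                    'tld': tld, 'word': word})
    return '%s?%s' % (server, qs)
```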


[what else can it do?]
well, if you capture ip information you can track and geo-locate anomalies in near-real time... that could be kinda cool i think...

it would be pretty easy for the app to report back to your browser that your result is way off from the current norm and actively alert you somehow...?


[kludges?]
well... erm... a lot... but right now the data is exported via xmlhttp requests that fire each time you change focus on the tab, and not for each actual request you make... firing on each request also kinda sucks for sites with frequent requests. keeping tabs on what requests are made and how often is probably the way to go.

(btw, i use a secondary xmlhttprequest because you can't read the public hash for an active connection from javascript easily afaik)

there are more kludges... check it out for yourself, i'm def open to suggestions ;)


[downsides]
you're losing (a ton of) entropy, so there's an increased chance an attacker could find a collision and be really tricky. right? a bigger wordlist and highly efficient hashing algo helps there mb...? it might make sense to just report the actual hash back to the server, or pass it as a sanity-check parameter. not sure what (if any) privacy ramifications that might have.
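back-of-the-envelope on that entropy loss (assuming a sha1 fingerprint; the wordlist sizes are just illustrative):

```python
import math

SHA1_BITS = 160  # bits in a full sha1 cert fingerprint

# boiling the hash into one of n words leaves only log2(n) bits
for n in (256, 2048, 65536):
    kept = math.log2(n)
    print('%6d words -> %5.1f bits kept, %5.1f bits thrown away'
          % (n, kept, SHA1_BITS - kept))
```

even a 64k wordlist keeps only 16 of 160 bits, so reporting the actual hash alongside the word as a sanity check seems like the right call.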

also the app currently has no anti-fraud capabilities. rate-limiting prolly makes sense server-side, and client-side the user could be subjected to some captcha-esque process that issues a cert to check for humans vs cylons.

the app is currently cleartext comms, so a mitm could mitm you when you use it ;)

oh, and this all only works if the snoopers aren't getting exact copies of certs, either from the CAs or from a compromised certificate store.


[other?]
there are some bugs and unimplemented features... and the app is still in the sandbox since i'm not done testing and adding features.


[coda]
that's all... for now... :)

Monday, April 19, 2010

why it sucks to be an infosec defense guy & an example of real-world cyberwar

i got a chance to listen to Richard Clarke talk w/ Terry Gross on Fresh Air today, and while it was full of a lot of the things that suck about listening to mass-media talk about infosec, there were definitely some gems...

i'd say it's worth a listen... anywho, onto the content:


[why it sucks to be an infosec defense guy]

@ 02:20

"somehow from a thumb-drive, a virus a worm got into the classified network, which is supposed to be a closed loop network, of CENTCOM and attacked compromised thousands of computers of our warfighters in Iraq and Afghanistan and probably exfiltrated large amounts of information to someplace in the internet [in December 2008]"

ok, so this blurb says two things to me.

1) "it attacked and infected thousands of computers on a closed-loop network" - there's a lot of assumption here, but when i hear about worms spreading in closed networks, it makes me say 'oh you didn't apply security patches to those machines because you thought they were safe'. unless this thumb-drive was full of 0day, this incident is classic failure to follow best-practices because you assumed some other layer of defense would keep you safe.

2) and wait, was this "closed-loop" network airgapped? well, clearly it wasn't if you were able to exfiltrate any data out of it to the internet. and even if it wasn't an airgapped network, why the #@%(*@#%* are you letting this classified military network which supports men & women with guns TALK TO THE INTERNET?!?! srsly guys, you know firewall policies can be set to block traffic leaving your network too, right?

this kind of stuff just sucks. here you have a network which should be one of the most secured in the world, and has tons of resources dedicated to protecting it, and it falls flat on its face w/ two well known best practices. when .mils aren't doin this stuff, you know that corp networks are probably worse. how can you tell me to help protect you if you're unwilling to patch and control your network? and you're surprised when bad things happen to you? srsly?

we know how to do so much good defensive stuff, but it's a lot of mundane process and procedure. it takes cycles and people, and it takes some documentation and training, some audit and enforcement, and it takes some effort and work. and it seems like no one is doing it... booo :(

oh well... c'est la vie


[an example of real-world cyberwar]

as a bonus...

remember when Israel bombed some secret facility in Syria? well, according to Clarke, that attack was performed by Israeli F-15s and F-16s which are very not-stealthy fighters. so a reasonable question is why weren't these planes shot at/down by Syrian air-defense networks?

according to Clarke, the Syrians saw nothing on their radar at the time and after the fact because "the Israelis had used cyberwar as part of a traditional attack. They had taken control of the Syrian air-defense system, and made all of the radars look like there was nothing in the sky, even though the sky was filled with Israeli fighter-bombers."

anyway, just wanted to include this because so many people in the infosec game seem to think that cyberwar can only be a digital-pearl-harbor type catastrophic attack. as if the entire attack will be encompassed by bytes on a wire. in my opinion cyberwar capabilities can be used effectively as a small part of larger tactical engagements. dismissing cyberwar as a fantasy ignores real-world realities and capabilities which are apparently being put to use today by state actors, and possibly others...

Friday, March 5, 2010

more xss introduced by security devices

soooo, i found this a while back, and it may be patched or who knows... but i (re?)'rediscovered' it n kinda had to be snarky n vocal about it... such a surprise, i know ;)

it's kinda similar to the xss introduced by an intermediate security device post from a bit back...

i see a little light-weight web server i'm not familiar with, and kinda assume it had to be made in the last i donno... 10 years? so these guys who made it are sitting around a table and they're like:

"hey, let's make (or buy) this simple http server that just does some simple stuff really well and *nothing else*, and use it as a workhorse for these expensive widgets we want to sell!"

and later, someone says:

"man, we need a simple http server to run this security service that authenticates unknown users" and they build it into a security-ish widget...

an unauthenticated user requests a page:

GET /somethin.aspx?foo=bar HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; ...)
Accept: */*
Pragma: no-cache
Host: somehost.domain.tld

...



and the little server that could redirects them to authenticate:

HTTP/1.1 200 OK
Server: ********gw
Content-Type: text/html
...

<HTML><HEAD><TITLE>***********************Authentication Redirect</TITLE><META http-equiv="Cache-control" content="no-cache"><META http-equiv="Pragma" content="no-cache"><META http-equiv="Expires" content="-1"><META http-equiv="refresh" content="1; URL=https://an.auth.svr/login.html?redirect=http://somehost.domain.tld/somethin.aspx?foo=bar"></HEAD></HTML>


of course the server encodes the output reflected in th-...


GET /somethin.aspx?foo=bar"></head><body><script>alert('wot?')</script></body> HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; ...)
Accept: */*
Pragma: no-cache
Host: somehost.domain.tld

...


HTTP/1.1 200 OK
Server: ********gw
Content-Type: text/html
...

<HTML><HEAD><TITLE>***********************Authentication Redirect</TITLE><META http-equiv="Cache-control" content="no-cache"><META http-equiv="Pragma" content="no-cache"><META http-equiv="Expires" content="-1"><META http-equiv="refresh" content="1; URL=https://an.auth.svr/login.html?redirect=http://somehost.domain.tld/sometin.aspx?foo=bar"></head><body><script>alert('wot?')</script></body>"></HEAD></HTML>


you've gotta wonder... how many code releases and updates has the server gone through since... ummmm.... 2005? you know, have you thought about output encoding in the *5 years* since an xss worm made headlines w/ mainstream media? how much revenue did this bring you in the last 5 years? annnnnd how much is a simple static or dynamic analysis?
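for the record, the fix is just encoding the reflected request before it gets dropped into the meta refresh... something like this (a python sketch, obviously not the vendor's code):

```python
from urllib.parse import quote

def build_redirect(auth_server, requested_url):
    """sketch of the fix: url-encode the reflected request before
    embedding it in the meta refresh (names here are made up)"""
    return ('<META http-equiv="refresh" content="1; URL=%s?redirect=%s">'
            % (auth_server, quote(requested_url, safe='')))

# the breakout payload comes back as inert percent-encoding
evil = ('http://somehost.domain.tld/somethin.aspx?foo=bar"></head>'
        '<body><script>alert(1)</script>')
```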

not that this looks wormy; for a couple of reasons it doesn't. plus, modern anti-xss filters seem to protect against it.

one interesting bit is that the redirect values are completely arbitrary and seamless in the browser, which mb makes a targeted attack easier because the victim URL can be anything...?


***note: the vuln here is _not_ in msnbc.com***
***another note: ie8 anti-xss filter disabled for this screenshot***

other than that, it doesn't look like anything terribly special really, and someone has prolly already posted somethin about it somewhere, cause you just kinda trip over it if you get within 30 feet of the server...

anywho, that's all for now ;)

Wednesday, March 3, 2010

flash is dead... long live... *yawn*

well html5 has been rumbling around and 'maturing' for a while now...

i was recently introduced to the youtube html5 beta via fark iirc (linkfail). anywho, the article quoted some steve jobs flash/ipad/drama foo, and also included some nice quotes about epic flash failure from charlie 'i pwn n00b devs in my sleep' miller XD

sooo, throw a supported user-agent to youtube annnndddd... fail. firefox supports html5, but only some open video format, yada yada yada...

wellll, i wonder if there's anything interesting in the youtube src?

<snip>
<script type="text/javascript">
var yt = yt || {};
yt.preload = yt['preload'] || {};
yt.preload.start = function() {
  var img = new Image();
  yt.preload.VideoConnectionReference = img;
  img.onload = img.onerror = function () {
    delete yt.preload.VideoConnectionReference;
  };
  img.src = 'http://v18.lscache2.c.youtube.com/generate_204?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor&fexp=904020%2C902306&algorithm=throttle-factor&itag=34&ipbits=0&burst=40&sver=3&expire=1267621200&key=yt1&signature=7A4D3513CEE589B3E53529C08C6BDEA27DF80C1F.96F3E4606263CB33E9198662204B49FD2E4B98F7&factor=1.25&id=92e467b5ad5ad0bf';
  img = null;
};
yt.preload.start();
</script>
</snip>


soooo, i know *nothing* about html5 atm, but that's what jumped out at me...

scripts with interactions on the network layer, some id-foo, expire-foo, and key-foo... sounds like an interesting attack surface at a minimum ;)

i'll confess i downloaded chrome to try out the html5 vid... i'm glad i did for the new spinny loading graphic and this epic quote:

'all the bugs have been worked out of flash'
- @pzembashis


(btw, nice work misrepresenting html5 support in browsers pal :P [jkjk!])

lulz... anywho, security aside, sry steve jobs but my cpu wasn't very happy even w/o fullscreen... and man, to think these people are trying to go against flash w/ chop like that, ick :-\

prolly some interesting stuff to find in the rfc-ish linkage...?

Friday, February 5, 2010

datapyning (tool release)

okok, i'm always writing stuff and never getting it released, so this time i've kludged up a tool and dropped it on google code:

http://code.google.com/p/datapyning

just a little python script that will query search info (against google atm, others in next rev) and pull down all the returned results. the idea is to allow you to collect files/data en masse and store it away for further analysis later...

[purpose]

so is there any security relevance here? well i built the tool to archive data for a security research project i've been kicking around. i see it being useful for a variety of research and information discovery tasks, but i donno if anyone else will.

ultimately, the idea came from me trying to find some info i'd seen before and coming to the conclusion that the data had poofed into the aether. if you aren't archiving information you care about, is anyone else???

this tool might help you archive some of that data for your purposes...
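the guts boil down to 'search, extract links, fetch each'... roughly like this (a sketch, not the actual datapyning code):

```python
import re
import urllib.request

def extract_links(results_html):
    """pull http(s) links out of a search results page -- a crude
    regex stand-in for the real parsing"""
    return re.findall(r'href="(https?://[^"]+)"', results_html)

def archive(urls, outdir='.'):
    """fetch each result and stash it locally; note-and-move-on
    if a download fails, like the real tool"""
    for i, url in enumerate(urls):
        try:
            data = urllib.request.urlopen(url, timeout=10).read()
        except Exception:
            continue
        with open('%s/result_%04d.bin' % (outdir, i), 'wb') as f:
            f.write(data)
```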


[examples]

~ grab up to 20 PDFs posted in the last week w/ the search phrase 'free', verbosely

[user@box datapyning]$ ./datapyning.py -S ./null.list -n 20 -f pdf -t w -s free -v

~ grab up to 100 .xls files in the last year w/ .com, .org, and .net domains w/ search phrase 'profit' quietly into a dir called foo

[user@box datapyning]$ ./datapyning.py -f xls -t y -S ./small.list -s profit -q -d foo

~ grab up to 100 results from the last 24 hrs for each tld w/ the search phrase 'default.password'

[user@box datapyning]$ ./datapyning.py -s "default.password"


[limitations]

* searching for -s 'foo bar' makes google barf, but -s 'foo.bar' works... wtf, mah bad, def on the list to get fixed :(
* other 'advanced' search features (intext:, etc) aren't accessible via cli and mostly not through the search phrase
* currently the tool kinda expects search frequencies >= 1 per day (result dir contains dirs named by search date)
* search domains/sites aren't handled on the cli (files w/ crlf delimiters only)
* max of 100 records per search
* no status bar for larger downloads (it will timeout, make note, and move on if d/l fails)
* no rate limiting, sooo it will use the bandwidth it can
* not sure if the way download file names are genericized and logged makes sense
* tied to google (but potential for either modularized search providers or mb search agnostic)

Tuesday, February 2, 2010

snail-mail-fail

hey lookit, important tax-return document in the mail... wazzat w/ the top of the envelope?



erm... umm... wot?



sighhhhh.... yea, those current number fields aren't blank... fuggin wonderful...



so there's an IRL infosec attack in motion... i'll speculate local postal carriers couldn't harvest enough numbers to make it worthwhile... maybe a USPS mail distribution worker, or someone in the mail or finance dept of Chase or whoever produces their mailers...?