
Posts

Showing posts from January, 2015.

#I-Hegel: Why there will never be artificial intelligence (#AI)

... but perhaps something more powerful. Lately, I have been thinking about AI a lot. Right now, I am reading Hegel again, and I am trying to do it seriously (sorry guys, I do not know if this can be done in English...). What strikes me is this: There is no intelligence. Intelligence is a force ("Kraft") that is said to lead to different expressions ("Äußerung"). Think about a guy who plays chess at a grandmaster's level and is able to solve any mathematical equation with ease. He is very intelligent, isn't he? The problem is that we do not know anything about the force except through its expressions. We cannot find intelligence in anyone without him expressing something we declare to be intelligent. Therefore, the expression and the force cannot be differentiated in reality. Someone does something we label intelligent, and therefore we say that the mysterious force of intelligence is somewhere in him or her. Hegel tells us that this is a wrong judgement: T

Handling #bigdata in R (2): Cuda and rpud

Taking into account how long it took just to load the data from the kaggle competition, working with it is kind of scary. Therefore, I am going to speed up my system by using GPU computing in R. Modern graphics cards are very powerful and can - in principle - work like a high-performance cluster. Following this brilliant tutorial by Chi Yau, I managed to get Cuda running on my Ubuntu machine. It took some time, especially the "make" command at the end. But, as you will see shortly, it is absolutely worth the effort. The second part of the tutorial shows how to install rpud (don't try to install it directly from R-Studio, but follow these steps). Again, it took some time, especially until I realized that the type="source" parameter has to be added when installing the package. Finally, everything is working, and I followed Chi Yau's example and calculated a distance matrix for datasets with a huge number of vectors. The gain in speed is unbelievable! And here is
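A minimal sketch of the setup and comparison described above, assuming a working CUDA installation and the rpuDist function from the rpud package (its GPU counterpart to base R's dist, as used in Chi Yau's tutorial). The matrix sizes here are illustrative, not the ones from the post, and the GPU call obviously only runs on a CUDA-capable machine:

```r
## One-time install: rpud must be built from source so it can link
## against the local CUDA toolkit (this is where type = "source" matters)
## install.packages("rpud", type = "source")

library(rpud)   # GPU-accelerated routines; requires a CUDA-capable card

# A matrix with many row vectors, as in the distance-matrix example
set.seed(1)
m <- matrix(rnorm(3000 * 120), nrow = 3000)

# CPU vs. GPU: both compute the full pairwise distance matrix
cpu.time <- system.time(d.cpu <- dist(m))
gpu.time <- system.time(d.gpu <- rpuDist(m))

# Timings side by side; the GPU version should be far faster for large m
print(rbind(cpu = cpu.time, gpu = gpu.time))

# Sanity check: both routes agree up to floating-point tolerance
stopifnot(all.equal(as.vector(d.cpu), as.vector(d.gpu), tolerance = 1e-4))
```

Since the result of a timing run depends entirely on the hardware, no concrete numbers are shown here.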

Handling #bigdata in R (1): data.table

How big is big? Are you fit for really big data? These are some questions I am thinking about. Luckily, there is a kaggle competition going on, with the aim to predict the click-through rate in a huge dataset of webpage visits. The task is to predict the probability that 4.5 million users click on an advertisement. The training dataset contains 40 million (!!!) lines of user data. In this little series I will share my experience in trying to handle this mess. First problem: How to load the data? Loading the csv file the normal way takes much too long. The package "data.table" includes the fread function, which is much faster. By setting colClasses to "character", all columns are loaded as character class: library(data.table); train <- fread("train.csv", colClasses = "character") Reading nearly 6 GB will take some time, nevertheless...
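The fread call from the post can be tried on a small stand-in file; the temporary CSV and its column names below are illustrative, not the actual kaggle train.csv:

```r
library(data.table)

# Write a tiny stand-in CSV (the real train.csv is nearly 6 GB)
tmp <- tempfile(fileext = ".csv")
write.csv(data.frame(id = 1:3, click = c(0, 1, 0)),
          tmp, row.names = FALSE)

# fread with colClasses = "character" forces every column to character,
# which skips per-column type guessing on very long files
train <- fread(tmp, colClasses = "character")

str(train)
sapply(train, class)   # every column comes back as "character"
```

Loading everything as character also avoids surprises with ID-like columns that would otherwise overflow or lose leading zeros as integers.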

#Phreaking is back - on android!

Phreaking, or phone freaking, was the beginning of the hacking culture. "The term first referred to groups who had reverse engineered the system of tones used to route long-distance calls. By re-creating these tones, phreaks could switch calls from the phone handset, allowing free calls to be made around the world." ( Wikipedia ) OK, you cannot phone for free by whistling into your android device. But try this: Type *#*#4636#*#* into your smartphone and you will find a secret menu. There are many other secret codes, including functions to reset the entire device to factory state. Even more spooky: Some webpages simulate the dialing of these numbers. So surfing the web with your smartphone may cause serious trouble.   Taken from Wizzywig .