HAL9000

HAL9000
"It just isn't conceivable that you can design a program strong enough to beat players like me."

March 27, 2014

Chesstroid ratings introduced

I'm finally releasing the first list, covering 89 UCI/XB engines compatible with the tournament mode of Chess for Android. I believe it's the first serious attempt since Aart Bik's 2013 tournament, which was without any doubt a remarkable effort.

The results below represent only 8% of the planned sample, as 46 opening positions remain to be played. Still, I'm already satisfied with the values obtained so far. Frankly, I was expecting worse at the beginning.

The calibration is not proven yet. I've only made sure Stockfish DD sits reasonably below its 3200-Elo big brother running on a recent PC configuration (800 kN/s vs 12000 kN/s, assumed to reflect a -250 Elo gap). Other engines were also checked for a plausible gap against their PC versions. And finally, the low-end engines are the ones I can easily defeat every time, so they can't be rated much above 1000.
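That speed-to-Elo assumption can be sanity-checked against the common rule of thumb that each doubling of search speed is worth roughly 50-70 Elo. A minimal sketch (the 65 Elo/doubling constant is an assumption, not a measurement):

```python
import math

def elo_gap_from_speed(slow_knps, fast_knps, elo_per_doubling=65):
    """Rough Elo gap expected from a pure speed difference.

    elo_per_doubling is a commonly quoted rule of thumb (roughly
    50-70 Elo per doubling of nodes per second), not an exact law.
    """
    doublings = math.log2(fast_knps / slow_knps)
    return doublings * elo_per_doubling

# Stockfish DD: ~800 kN/s on the phone vs ~12000 kN/s on a recent PC
print(round(elo_gap_from_speed(800, 12000)))  # 254
```

With the default constant the phone/PC speed ratio lands very close to the assumed -250 Elo.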

Test platform:
* Samsung Galaxy Note II running @ 1.6 GHz (CPU governor set to performance, or at least interactive; no downscaling allowed!)
* 64MB hash where selectable
* 4 cpu threads where selectable
* Own books disabled and replaced by Silver Opening Suite positions (4 of 50 played so far)
* All opening positions played twice with different colors
* Tablebases off
* Ponder off
* Time control: 5 sec/move

## Name                       Elo    +    -  Gam  Sco  Opp.  Dra
01 Stockfish DD              2970   66   62   86  77%  2782  38%
02 Stockfish 4               2888   60   58   86  65%  2790  48%
03 Stockfish 3               2881   61   60   86  63%  2790  42%
04 Stockfish 2.3.1           2868   60   58   86  60%  2792  47%
05 Critter 1.4 32-bit        2831   59   58   86  56%  2795  49%
06 Critter 1.6a 32-bit       2826   59   58   86  55%  2796  45%
07 Stockfish 2.0             2809   60   60   86  51%  2797  41%
08 Critter 1.2 32-bit        2797   72   69   84  70%  2606  27%
09 RobboLito 0.085e4l        2753   60   62   82  40%  2808  46%
10 Komodo32 2.03 JA          2747   60   61   86  48%  2758  42%
11 BlackMamba 2.0 32bit      2717   83   76   82  82%  2410  20%
12 Senpai 1.0                2687   63   64   82  38%  2765  35%
13 RobboLito 0.085g3l x86    2674   64   64   86  49%  2664  34%
14 Komodo32 3 AB             2646   66   68   86  46%  2667  22%
15 Komodo32 1.3 JA           2597   67   67   84  49%  2592  27%
16 Gaviota v1.0              2571   70   68   80  59%  2490  24%
17 Texel 1.03 32-bit         2565   70   71   84  46%  2583  18%
18 Toga II 3.0               2508   68   70   84  41%  2588  25%
19 Arasan 15.2 JA            2458   66   65   84  54%  2426  27%
20 IvanHoe 9.46b             2416   67   67   84  49%  2430  21%
21 Toga II 1.4.1SE           2383   65   65   84  45%  2433  27%
22 Arasan 14.0.1             2324   65   66   84  44%  2371  26%
23 Toga II 2.0 JA            2318   65   64   82  54%  2298  30%
24 Arasan 13.4               2315   67   65   80  62%  2237  29%
25 RedQueen 1.1.3 (TCEC) JA  2293   70   72   80  41%  2381  21%
26 DiscoCheck 3.7.1          2279   65   65   82  48%  2302  24%
27 Texel 1.01 32-bit         2264   66   67   84  41%  2347  25%
28 Gaviota v0.86             2261   66   64   80  58%  2211  29%
29 DiscoCheck 4.0.1          2256   64   64   82  49%  2273  32%
30 Crafty_23.5.JA_xb         2253   71   73   82  35%  2390  18%
31 Crafty_23.4.JA_xb         2204   65   66   80  42%  2271  29%
32 RedQueen 1.1.2            2192   67   69   82  38%  2296  21%
33 Rodent 1.00               2171   61   61   82  51%  2159  41%
34 Alfil 12.10 w32           2157   66   65   80  53%  2147  25%
35 gaviota v0.84             2151   65   63   80  63%  2061  35%
36 Rodent 0.18.0             2149   65   63   84  63%  2052  32%
37 GNU Chess 5.50-32         2145   63   64   80  40%  2223  35%
38 Rotor 0.8                 2098   65   66   80  43%  2152  29%
39 Rotor 0.7a                2088   63   63   82  49%  2094  28%
40 cheng3 1.07 JA            2086   66   66   82  50%  2087  24%
41 Daydreamer 1.75 JA        2078   65   66   82  48%  2095  22%
42 gaviota v0.83             2059   66   66   82  46%  2090  22%
43 Scorpio_2.7.JA_xb         2035   64   65   82  42%  2092  30%
44 GarboChess 3 (32-bit)     2022   67   64   84  66%  1906  25%
45 Pepito v1.59              2005   62   63   84  45%  2038  30%
46 Sloppy_0.23.JA_xb         2001   62   63   84  44%  2038  33%
47 Tucano_1.04.AB_xb         1968   64   64   84  51%  1963  24%
48 GNU Chess 6.0.2           1942   62   63   84  46%  1966  31%
49 DoubleCheck 2.7           1930   68   68   84  52%  1915  13%
50 DoubleCheck 2.6 JA        1912   65   64   84  52%  1898  23%
51 Danasah_4.88.JA_xb        1903   65   65   84  48%  1918  23%
52 DanasahZ_0.4.JA_xb        1901   65   65   80  48%  1914  29%
53 BetsabeII_1.30.JA_xb      1880   66   66   84  54%  1849  17%
54 Danasah_4.66.JA_xb        1851   70   68   80  60%  1766  23%
55 Typhoon_1.0.r358.JA_xb    1842   65   65   84  48%  1853  20%
56 Danasah_5.06.JA_xb        1838   69   67   80  66%  1713  26%
57 Greko_8.2_uci             1831   63   62   80  58%  1782  38%
58 GreKo_9.8.AB_uci          1818   64   65   84  42%  1880  27%
59 Phalanx_XXIII.JA_xb       1816   67   69   84  42%  1880  14%
60 GreKo_9.0.JA_uci          1795   63   63   84  50%  1790  29%
61 Diablo 0.5.1b JA          1773   63   63   84  47%  1792  27%
62 GreKo_10.0.JA_xb          1771   63   63   84  50%  1767  29%
63 Olithink_5.3.2.JA_xb      1750   67   67   84  53%  1725  15%
64 BetsabeII_1.22.JA_xb      1730   75   71   80  69%  1567  13%
65 Sungorus 1.4 JA           1690   68   68   84  55%  1646  21%
66 Myrddin_0.86.JA_xb        1686   70   68   84  62%  1584  14%
67 Jazz 6.40 JA  (unknown)   1683   65   67   84  41%  1738  25%
68 TJchess 1.1U              1670   65   65   84  52%  1645  26%
69 Natwarlal_0.14.JA_xb      1659   68   70   84  42%  1715  13%
70 Scidlet_2.61b2.JA_xb      1649   69   68   84  57%  1588  15%
71 DoubleCheck 2.3           1627   68   69   84  46%  1652  15%
72 KmtChess_1.21.JA_xb       1623   67   69   84  38%  1719  17%
73 Jazz v444 JA (unknown)    1552   69   69   84  52%  1534  15%
74 Jazz v5.01 JA (unknown)   1519   66   67   84  48%  1537  29%
75 Sjeng_1.12.JA_xb          1409   78   77   82  59%  1304  11%
76 BikJump v1.8 (32-bit)     1398   77   75   82  59%  1305  12%
77 ZCT-0.3.2500              1339   75   77   82  40%  1416  12%
78 AdroitChess 0.3           1333   75   78   82  40%  1417  15%
79 AdroitChess0.4 JA         1325   74   77   82  40%  1418  11%
80 Leonidas_r83.JA_xb        1315   74   72   80  65%  1166  23%
81 BikJump v2.1P (32-bit)    1291   75   79   82  35%  1424  13%
82 Sjaak_4.68.JA_xb          1279   81   77   78  69%  1089   8%
83 Tscp_1.8.1.AB_xb          1171   75   74   78  56%  1100  13%
84 Zzzzzz_3.5.1.JA_xb        1131   72   72   78  51%  1104  21%
85 Rocinante 2.0 JA          1097   74   75   78  49%  1108  13%
86 VIRUTOR CHESS 1.1.1       1004   76   80   78  36%  1117  13%
87 VIRUTOR CHESS 1.1.4        991   76   80   78  35%  1119  15%
88 Chess for Android          859   84   93   78  21%  1132   6%
89 Simplex 0.9.8              655  115   40   78   6%  1153   3%

March 25, 2014

How deep can brute force dive?

Programs playing at over 3000 Elo are not rare at all today. Although the progress is mainly due to much smarter evaluation algorithms, especially in weighing positional versus material factors, stronger processors with multiple cores and many threads help a little too. We've reached a time when 8 or 16 cores and up to 64 threads on a single machine are common talk.

Without any good reason, I just can't stop fearing the day when machines will completely solve the chess puzzle from the first move. Are they close to calculating everything?

Thankfully, not yet. Even though today's strongest PC processors remain far behind the raw node speed of 1997's Deep Blue, many free engines manage to play much stronger than it did, thanks to better evaluation algorithms.

What would the limit be, then, if engines were forced to calculate every possibility? In other words, if they used pure brute force instead of selective search, giving up the pruning of senseless variations?

Let's assume an average of 3 minutes of thinking time per move, representing tournament time controls, and an average of 30 legal moves per half-move (ply), to calculate how far machines can look ahead:

Deep Blue @ 200 million nodes per second:
Max analysis depth: log(200M nodes/s * 180 s) / log(30) = 7.15 plies!

Stockfish @ 25 million nodes per second:
Max analysis depth: log(25M nodes/s * 180 s) / log(30) = 6.54 plies!

Obviously still not enough. In both cases machines can't even look 4 full moves ahead!
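The two depth figures above follow from the same branching-factor formula; a quick check in Python:

```python
import math

def brute_force_depth(nodes_per_sec, seconds, branching=30):
    """Plies reachable by full-width search: solve branching**d = nodes visited."""
    return math.log(nodes_per_sec * seconds) / math.log(branching)

print(f"Deep Blue: {brute_force_depth(200e6, 180):.2f} plies")  # 7.15
print(f"Stockfish: {brute_force_depth(25e6, 180):.2f} plies")   # 6.54
```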

Now, let's calculate how much time Deep Blue would need to examine every possible variation 25 plies ahead, remembering that the strongest recent engines reach this depth easily on a modern PC:

(30^25) / 200M nodes/s / 3600 / 24 / 365 = 1.34 * 10^21 years

Hmm. Too long for an opponent to wait.

This time let's be more reasonable: 20 plies of depth and 20 possible moves per ply:
(20^20) / 200M nodes/s / 3600 / 24 / 365 = 1.66 * 10^10 years

At least the second result can be grasped by the human mind: 16.6 billion years. The conclusion is that brute-force projects must still be postponed.
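Both waiting times above can be reproduced from the stated assumptions (branching factor, depth, and 200 million nodes per second):

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

def brute_force_years(branching, depth, nodes_per_sec):
    """Years needed to visit every node of a full-width tree of the given depth."""
    return branching ** depth / nodes_per_sec / SECONDS_PER_YEAR

print(f"{brute_force_years(30, 25, 200e6):.2e} years")  # 1.34e+21
print(f"{brute_force_years(20, 20, 200e6):.2e} years")  # 1.66e+10
```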

Machines don't see everything yet and we won't live enough to witness that.

March 21, 2014

SSDF rating list: Accurate but too slow



This is the latest SSDF list, from November 2013. Obviously, tournament time controls and hardware diversification lead to better accuracy. However, the price is not negligible: many new programs and versions are missing because of the time new measurements require.

#  Program name                           Ratg  + -  Game Won Opp
1  Komodo 5.1 MP x64 2GB Q6600 2,4 GHz    3241 56-51  181 65% 3132
2  Stockfish 3 MP x64 2GB Q6600 2,4 GHz   3211 27-25  838 72% 3051
3  Deep Rybka 4 x64 2GB Q6600 2,4 GHz     3208 26-24  966 74% 3028
4  Deep Hiarcs 14 2GB Q6600 2,4 GHz       3205 25-24  928 69% 3064
5  Deep Rybka 3 x64 2GB Q6600 2,4 GHz     3197 23-21 1331 76% 2995
6  Naum 4.2 MP x64 2GB Q6600 2,4 GHz      3150 23-22  999 62% 3062
7  Hiarcs 14 256MB Athlon 1200 MHz        3134 88-65  160 84% 2845
8  Naum 4 x64 2GB Q6600 2,4 GHz           3123 22-21 1156 69% 2981
9  Deep Junior 13.3 2GB x64 Q6600 2,4 GHz 3117 24-23  842 56% 3074
10 Deep Shredder 12 x64 2GB Q6600 2,4 GHz 3107 19-19 1421 64% 3006
11 Spike 1.4 MP 2GB Q6600 2,4 GHz         3106 20-19 1309 62% 3019
12 Deep Hiarcs 13.2 2GB Q6600 2,4 GHz     3103 28-27  632 60% 3031
13 Hiarcs 13.1 2GB Q6600 2,4 GHz          3101 24-24  828 58% 3044
14 Deep Fritz 13 2GB Q6600 2,4 GHz        3100 26-26  706 57% 3049
15 Deep Fritz 12 2GB Q6600 2,4 GHz        3091 21-21 1080 55% 3053
16 Deep Rybka 3 256MB Athlon 1200 MHz     3074 39-37  332 58% 3018
17 Deep Junior 12 x64 2GB Q6600 2,4 GHz   3071 22-21 1058 60% 2997
18 Deep Fritz 11 2GB Q6600 2,4 GHz        3063 19-18 1464 62% 2975
19 Zappa Mexico II x64 2GB Q6600 2,4 GHz  3054 25-25  776 59% 2989
20 Naum 3.1 x64 2GB Q6600 2,4 GHz         3040 27-26  692 58% 2986

A method to rate Android chess programs

An accurate rating list is not at all easy to establish. It would be if chess programs were eggs: just grab two of them and bump! The winner is the one left in one piece. Though, of course, the loser could never be rated again.

A study of the various methods reveals nothing but constraints. Each time we must find an optimum balance, and that balance must remain valid long enough:

1) More samples mean more accuracy but need more time. Spend money on more host devices, or wait years for accuracy?

2) Quick games are possible but lose precision. Trade away sample count or sample quality against time?

3) Programs are updated all the time, often before we finish rating a version. Rate each version separately, or stack all versions together like a single human player?

4) Hardware capacities increase all the time. Spend money to diversify host devices and re-measure, or stay outdated on the same basis?

Really hard to choose... There's no single correct answer, but there are specific options which best suit the resources at hand.
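Constraint (1) is essentially the square-root law of statistics: the Elo error margin shrinks only with the square root of the number of games, so quadrupling the games merely halves the margin. A rough sketch under simplifying assumptions (evenly matched opponents, a guessed draw rate, normal approximation):

```python
import math

def elo_error_95(games, draw_rate=0.35):
    """Approximate 95% Elo error margin around an even score.

    Per-game score variance is 0.25 * (1 - draw_rate), since draws
    add no spread; 1600/ln(10) is the slope of Elo vs. score at 50%.
    """
    sigma = math.sqrt(0.25 * (1 - draw_rate))
    elo_per_score = 1600 / math.log(10)
    return 2 * sigma * elo_per_score / math.sqrt(games)

for n in (86, 344, 1376):
    print(f"{n:5d} games -> +/- {elo_error_95(n):.0f} Elo")
```

With these assumptions, 86 games give a margin near ±60 Elo, close to the error bars in the list above; getting down to ±15 would take sixteen times as many games.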

That's why different people have defined and will define different rules for their studies.

Mine are the following, and I hope they will remain unchanged for a long time:

* Platform: Any platform with little or outdated data (mainly Android, retro machines, old legends and everything out of the standard) is welcome. Winboard and UCI engines on PC are clearly out of scope, as many people already cover them very well. I may only need some UCI engines as references for calibrating the list.

* Time control: I'm all alone with one device and limited finances, so tournament timings are beyond my capacity. Bullet modes such as 1/1 or 1'+1" are quick and attractive, but fast time controls are always tricky: any deviation in time usage may degrade the result, and that loss of accuracy is intolerable. So I go for something in between, 5 seconds/move, a setting present in almost every program or machine since computer chess began.

* Opening book: No own books, unless impossible to disable. I prefer a car race on standard tyres only, and there are plenty of trusted opening suites available.

* Pondering: No. I do not and cannot use automated play between two connected devices; that is "a tester's dream", by the way. SSDF must be mentioned here, as they connect two PCs, one per program.

* Playing mode: Double round-robin tournaments in which programs of comparable strength play both sides of the same opening. The opening is unique to each tournament. When all divisions finish, one round is complete. At most the top 1/3 of each division moves one division up and the bottom 1/3 moves down. This exchange ensures optimum linking between divisions and minimizes segmentation effects on the list. Ratings are then updated before the next round starts with a different opening from the suite. Any device or program without tournament automation must be played manually against selected opponents. Those will hurt a lot indeed.

* Tablebases: Disabled, because many engines that use tablebases in their PC versions cannot use them under Android. The "same for all" rule looks fair here. Another constraint is the limited system resources of the Android environment: tablebases are potential resource hogs, requiring huge storage.

* Hash tables: Yes. 64MB for Android. Same for others whenever possible and effectively used.

* Program updates: Not requested, and not welcomed at all by a tester. But unfortunately there will be some :-) Each new version is a new player in the arena and must repeat all past rounds to join the others. In each round, the new version plays all participants of the best-matching division. This method stays accurate but causes significant interruptions.

* Hardware updates: Future devices will certainly allow stronger play. If I were rating only Android programs, I could assume the added playing power is the same for all of them and keep all previous input. But once any chess computer or non-Android app joins the list, I must diversify the hardware and repeat the whole story as SSDF does, which is very time consuming. My decision is simply to wait and see, given that my current device, a Samsung Galaxy Note 2, is not yet outdated.

* Graphical user interface: Aart Bik's Chess for Android. Although it still lacks many features, it's the only one that allows automated engine tournaments. Chess for Android is to Android what Arena is to Windows.
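The promotion/relegation step from the playing-mode rule above can be sketched as follows. The division contents and helper name are illustrative; only the "top third up, bottom third down" rule comes from the text:

```python
def shuffle_divisions(divisions):
    """Exchange players between adjacent divisions after a round.

    divisions: strongest division first; each division is a list of
    (name, rating) tuples.  The top third of each lower division is
    promoted and the bottom third of the division above is relegated.
    """
    for upper, lower in zip(divisions, divisions[1:]):
        upper.sort(key=lambda p: p[1], reverse=True)
        lower.sort(key=lambda p: p[1], reverse=True)
        k = min(len(upper), len(lower)) // 3
        if k == 0:
            continue
        # Simultaneous swap: bottom k of upper <-> top k of lower.
        upper[-k:], lower[:k] = lower[:k], upper[-k:]
    return divisions

divs = [[("A", 2900), ("B", 2800), ("C", 2700)],
        [("D", 2600), ("E", 2500), ("F", 2400)]]
print(shuffle_divisions(divs))
```

Because adjacent divisions always trade players, every division stays statistically linked to its neighbours, which is what keeps the single Elo scale consistent across the whole list.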

March 20, 2014

Chesstroid chess rating list, soon...

Rating lists showing the strength of chess programs have been available for more than 20 years.

Chess enthusiasts have always been curious about how strongly chess programs play and about their progress toward "machines playing better than humans". This insane testing hobby never stopped, not even after a machine defeated the human world chess champion in 1997.

One of the oldest groups of testers is the Swedish SSDF, the first organization to issue computer ratings, back in the 1980s. They still continue the same way today, publishing two updates a year.

I like SSDF because:
* It's a multiplatform list including chess computers, PC programs and also mobile apps.
* They are unique in implementing tournament time controls, not blitz or bullet. Very time consuming and only possible by using a network of test machines.

Other dedicated testing networks, such as CCRL, provide regular updates and precise results too, but their scope is mainly limited to Winboard- and UCI-compatible chess engines. Even though that segment contains the strongest programs available, CCRL will not tell you how strongly a given free iPhone chess program plays. Only Winboard/UCI engines, which require a GUI to work; in other words, none of the standalone programs or devices you may wish to compare.

Having seen how little information about the playing level of Android chess apps and engines can be found, I've been thinking it's time for someone interested in AI development, statistics, programming techniques and hardware technologies to do something serious and recover the missing data.

Who would be crazy enough to spend a lot of spare time and allocate some hardware to generate and maintain a different kind of rating list, focused on mobile programs and engines?

Then I looked in the mirror. I looked a lil' bit crazy that day.

And so the Chesstroid chess ranking arrives very soon, after 3 months of initial study and preparation.

March 18, 2014

New UCI engine for Android: Senpai 1.0 by Fabien Letouzey


Senpai is a brand-new engine from the developer of the famous Fruit and Toga engines. The good news is that an Android version was not forgotten. And that made me happy.

After very quick initial tests, the engine has qualified for engine-to-engine tournaments under Chess for Android.

* Importing into GUI: OK
* Tournament mode: OK
* Time control 1/1": OK
* Time control 1/5": OK
* Resigns: No
* Own book: No
* Tablebase support: No
* Multicore support: Yes

I expect it to be rated around 2650-2700 on a Galaxy Note 2. Test games will follow to confirm.

Download: http://www.chessprogramming.net/senpai

Another engine, Fruit Reloaded, can also be found at the same link, though without an Android compile.

March 14, 2014

In pursuit of Carlsen


Just a couple of weeks after the Zurich SuperGM tournament, won by world champion and world #1 Magnus Carlsen (22, Elo 2881), eight crown-hungry challengers, including Anand, who recently lost the title to Carlsen, and the determined FIDE #2 Aronian, meet at the Candidates Tournament in Khanty-Mansiysk, Russia, from 13 to 31 March 2014.

The winner will face Carlsen in a World Championship match by the end of this year.

Official site: http://candidates2014.fide.com

March 13, 2014

Bxh2+: A terrifying computer move I can't forget


The 10th move from Junior was the moment I said: "Oh no! What's going on there?"

11 years ago. Those were the times when computer programs started to get very annoying :-)

Kasparov,G (2847) - Deep Junior [E48]

X3D FIDE Man-Machine World Championship New York City, 05.02.2003

1.d4 Nf6 No more Semi-Slav, which was played in games one and three. Junior was in serious trouble in both games. Here the Junior team goes for the solid Nimzo-Indian Defense.

2.c4 e6 3.Nc3 Bb4 4.e3 0-0 5.Bd3 d5 6.cxd5 exd5 7.Nge2 Re8 8.0-0 Bd6 9.a3 c6 [ 9...Nbd7 10.b4 c6 11.Ra2 a5 12.bxa5 Rxa5 13.f3 c5 0-1 Pliester,L-Lobron,E/Amsterdam 1995/EXT 97 (35); 9...c6 ]

10.Qc2 Bxh2+!! Unbelievable! Junior sacrifices a piece for an attack on Kasparov's king! A total shocker. Kasparov was visibly stunned on the big X3D screen. He calmed down only after making sure he wasn't being mated. The attack is not a forced line; it is highly speculative.

11.Kxh2 Ng4+ 12.Kg3 [ 12.Kg1?? Qh4-+ 13.Rd1 Qxf2+ 14.Kh1 Re6 ]

12...Qg5 13.f4 Qh5 14.Bd2 Commentators Seirawan and Ashley didn't like this move. But it connects the rooks and white isn't afraid of 14...Rxe3 15.Bxe3 Nxe3 16.Qd2. [ 14.e4!? ]

14...Qh2+ 15.Kf3 Qh4 16.Bxh7+ After over 30 minutes of thought Kasparov heads to one of the drawing variations.
[ Kasparov could have attempted to play for a win with the incredibly risky 16.g3 Analysis shows it is probably playable, but Kasparov thought it just too dangerous against a program like Deep Junior. 16...Nh2+

( 16...Qh5 Yet another wild line that looks good for White. 17.Rh1 Nxe3+ 18.Rxh5 Bg4+ 19.Kf2 Nxc2 20.Rah1 Bxh5 21.Rxh5 Na1 22.Bxh7+ Kf8+- ;

16...Qh2 According to his postgame comments, this was the move Kasparov was most worried about during his long think before taking the draw. 17.f5 Nd7 ( 17...h5 18.e4 ; 17...Qh3 ) 18.Kxg4 Qg2 19.e4 Nf6+ 20.Kf4 dxe4 21.Bxe4 Nxe4 22.Nxe4 Qxe2 23.Rae1 Rxe4+ 24.Qxe4 Qxd2+ 25.Kf3+- )

17.Kf2 Ng4+ 18.Ke1 White just walks away. (RUNS away!) These lines are still dangerous for White, although no concrete win has been found for either side. Black keeps good attacking chances for the piece and it is definitely harder for a human on the white side!

( 18.Kg2 Qh2+ 19.Kf3 g6 ( 19...f5 20.Bxf5 Qh5 ) 20.e4 ( 20.f5 Nd7! Threatening mate in three with ..Nde5+! 21.e4 c5! The saving shot for Black, removing the key d4 pawn. 22.Kxg4 ( 22.Bg5 gxf5 23.Nxd5 cxd4 24.exf5 ) 22...cxd4 23.Bg5 forced ( 23.Rh1 Ne5+ 24.Kf4 h6!! Forcing mate! 25.Nxd4 ( 25.Rxh2 g5# ) 25...Qf2+ 26.Nf3 Qxf3# ) 23...dxe4 24.Nxe4 Qh5+ 25.Kf4 gxf5 26.Nxd4 Qg4+ 27.Ke3 Qxg5+ 28.Kf2 fxe4 29.Bxe4 Nf6=/+ ) 20...dxe4+ 21.Bxe4 Nf6 22.f5 Qh5+ 23.Kf2 Nxe4+ 24.Nxe4 Qxf5+ 25.Ke3 Qg5+ 26.Kd3 ( 26.Kf2 Qf5+ 27.Ke3 Qg5+ 28.Kf2= ) 26...Bf5 27.N2c3 Qxg3+ 28.Be3 Black has three pawns for the piece and maintains a dangerous attack.)

18...Qh3 ( 18...Qh5 ) 19.Rg1 Nd7 20.e4 dxe4 ( 20...Nh2 ) 21.Nxe4 Ndf6 ( 21...Qh2 22.Kd1 Ndf6 23.Nxf6+ Nxf6 24.Re1 Bg4 ) 22.Nd6 ( 22.Nxf6+ Nxf6 23.Bc3 Bg4 24.Kd2 Bxe2 25.Bxe2 Ne4+ ) 22...Re6 23.Nc4 ; 16.Ng3 Nh2+ 17.Kf2 Ng4+ 18.Kf3 Nh2+= ]

16...Kh8 17.Ng3 Nh2+ 18.Kf2 Ng4+ 19.Kf3 Nh2+ 1/2-1/2


Replay this extraordinary match at:
http://en.chessbase.com/portals/4/files/games/2003/x3d5.htm

March 12, 2014

Ten of the greatest chess games of all time

An old article from the Daily Mail compiles ten of the most exciting chess games in history. Kasparov's horror show against Deep Blue tops the list.
Click to read

Android UCI engine update: Gaviota 1.0

Gaviota 1.0 has been released by its developer, Miguel A. Ballicora.

I'm happy to see an Android version is also included. I've tested it with the Chess for Android GUI, and everything works fine on a Galaxy Note II.

I don't have any strength tests to share yet, but after a few games my feeling is that this version is way stronger than the previous 0.86 release.

Gaviota homepage: https://sites.google.com/site/gaviotachessengine
Chess for Android (apk): http://www.aartbik.com/MISC/DATA/NetworkChess.apk

March 11, 2014

Strongest Android chess engine?

***DISCLAIMER***
DEAR CHESS FAN,
IF YOU ARE VISITING THIS PAGE AFTER A GOOGLE SEARCH, PLEASE NOTE THAT THE POST BELOW IS QUITE OLD AND DOES NOT CONTAIN RECENT INFORMATION. IT CAN EVEN MISLEAD THE READER.
EVEN THOUGH I POST RECENT INFO UNDER THE SAME TITLE, UNFORTUNATELY THAT DOESN'T HELP WITH THE GOOGLE SEARCH ENGINE.
I STRONGLY ADVISE THE VISITORS TO BOOKMARK THE BLOG AND/OR SUBSCRIBE TO ITS UPDATES IN ORDER TO REACH FOR THE MOST RECENT TOPICS.
THIS BLOG IS A LIVING ONE AND IT PROVIDES REGULAR UPDATES WITHOUT ADS OR FEES: JUST CHECK THE MAIN PAGE AT: CHESSTROID.BLOGSPOT.COM
***END OF DISCLAIMER***

What is the strongest chess program available on Android? This is a common question among chess fans. The answer, as of today, is fairly clear: Stockfish DD.

Stockfish is a free, open-source engine that dominates all current rankings in the Windows environment, and Android is no different. Besides, Stockfish stands all alone at the top, since Houdini, Rybka, Naum etc. are still not available for Android.

The Android version of Stockfish plays at approximately 2900 Elo on a Galaxy Note 2. The closest rival is Critter 1.6a; unfortunately Critter is no longer developed, and it is estimated to be about 100 Elo behind.

I don't yet have a single example game that Stockfish lost to any other engine. The quest continues.

To download and try Stockfish on Android, go to: www.stockfishchess.org

For those who don't have a chess GUI to plug the binary into, there's a standalone application called Droidfish on Google Play with the same engine embedded.

Droidfish can also be found on the homepage above, under the third-party apps section.

TCEC Season 6

Season 6 started in February, with 36 chess engines fighting on an extremely powerful PC configuration.

The engines use up to 16 CPU cores, and all games are played at 120 minutes + 30 seconds per move, the equivalent of the FIDE tournament time control.

Yes, it's simply the highest level of computer chess achievable on a PC today.



TCEC Live

More info about TCEC can be found at: http://en.wikipedia.org/wiki/Thoresen_Chess_Engines_Competition

March 9, 2014

Chess under Android? Yes...

I'm not a serious chess player, but I've always enjoyed following what's happening in the world of chess. After a lot of experience searching for strong chess programs for Android, I finally decided to share what I've learned through this blog.
The aim is to help fill in the missing pieces for others, especially Android users who may not have noticed they can carry a 3000-Elo chess grandmaster in their pocket.

Ready to go!

Day one of the diary of a chess-addicted mind.