HAL9000

"It just isn't conceivable that you can design a program strong enough to beat players like me."

January 12, 2018

TCEC Season-10: Who's the winner, really?

The 10th season of TCEC may have been the most interesting of all. The three musketeers met once again at the top and shared the three available crowns equally. I'm not interested in constructing a spectacular article about which one is stronger. I just think they are very close to each other, and producing a clear difference between them seems very difficult, even with the extreme hardware used by the TCEC organisers.
We've seen Houdini reaching 70 MNps in some starting positions and Stockfish floating around 55. It was impressive to see 43 threads running together. All that's left to do is to thank the whole TCEC team for the great show they delivered to chess lovers.

7-December-2017: Houdini 6.02 wins TCEC Tournament
READ MORE AT CHESSDOM

25-December-2017: Stockfish wins TCEC Rapid
READ MORE AT CHESSDOM

30-December-2017: Komodo wins TCEC Blitz
READ MORE AT CHESSDOM

By the way, you should not miss the great article published by Chessbase, which includes comments from the authors of Houdini and Komodo. A must-read for all:
HOUDINI WINS THE TCEC SUPERFINAL

All games and crosstables available in the TCEC ARCHIVE BY CHESSDOM



October 15, 2017

TCEC-10 is playing with Stockfish dev, Houdini 6 and Fire 6.1!

TCEC Season-10 could finally start this week, after the usual speculation and some delay.

No matter what, TCEC is still the most interesting computer chess competition, and the organisers deserve appreciation for their efforts.
This time the hardware is a rented monster built from high-end components:
- Dual Xeon 2699 v4 – 44 cores in total
- Supermicro X10DRL-i
- 64 GB RAM, meaning a maximum of 16 GB hash per engine
- 250 GB Crucial SSD
- Windows Server 2012 R2

For Stage 1 the server will be limited to 22 cores; from Stage 2 onward, including the Superfinal, the event will run on the full power of the 44 cores.

TCEC Season 10 participants are:

Andscacs
Arasan
Bobcat
Booot
Chiron
Fire
Fizbo
Fruit
Gaviota
Ginkgo
Gull
Hakkapeliitta
Hannibal
Houdini
Jonny
Komodo
Laser
Nemorino
Nirvana
Rybka
Stockfish
Texel
Vajolet2
Wasp

The stage structure is as follows:

TCEC Season 10 will consist of a preliminary stage, a qualifying stage, and a Superfinal. Each will have a different time control and structure.

Stage 1: this is the preliminary stage, involving 24 engines playing a single round robin (276 games). The time control will be 60 min + 10 sec/move. The top 8 engines qualify for the next stage.

Stage 2: the second stage is a qualifier among 8 engines, playing 2x double round robin (112 games) at a slightly longer time control of 90 min + 10 sec/move.

Superfinal: two engines will meet in the Superfinal, playing a total of 100 games for the title of Grand Champion of TCEC.
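
For the curious, the announced game counts follow directly from the round-robin formula. A minimal Python sketch (engine counts and pairings are taken from the stage descriptions above; everything else is illustrative):

from math import comb

def round_robin_games(engines: int, games_per_pairing: int) -> int:
    # Total games when every engine meets every other one
    # games_per_pairing times (1 for a single round robin, 4 for 2x double RR).
    return comb(engines, 2) * games_per_pairing

print(round_robin_games(24, 1))  # Stage 1: 24 engines, single round robin -> 276 games
print(round_robin_games(8, 4))   # Stage 2: 8 engines, 2x double round robin -> 112 games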

You can watch TCEC games live: HERE

August 13, 2017

WCCC-2017 Leiden: Komodo wins in absence of Stockfish

The ICGA's long-criticized championship has been held in Leiden once again, with an accelerated tournament format.

Most of the strongest engines, like Stockfish, Houdini and Gull, did not participate, so it was easier for Komodo to win a second time after 2016, though the victory did not come as easily as expected against Jonny and Shredder, the two serious competitors. Play-offs had to be played to decide the champion.

The WCCC is nothing but a small-sized tournament to me, after all. Although there are interesting games, it's difficult to generalize from the result. The only thing I appreciate here is that people involved in computer chess programming meet each other, so there's a social benefit, not just bits and bytes.

SUMMARY:

* Date: Monday 3 July until Friday 7 July 2017
* Location: Leiden University, Snellius Building, Turing Room, Leiden, Netherlands
* Schedule: The event began at 10:00 on Monday, July 3rd; Tuesday through Friday it began at 09:00.

Results:

Round 1 - 7/3/2017 10:00
1 Jonny - Komodo 1/2
2 Chiron - Shredder 1/2

Round 2 - 7/3/2017 15:00
1 Komodo - Shredder 1-0
2 Jonny - Chiron 1/2

Round 3 - 7/4/2017 09:00
1 Chiron - Komodo 1/2
2 Shredder - Jonny 1/2

Round 4 - 7/4/2017 14:00
1 Komodo - Jonny 1/2
2 Shredder - Chiron 1/2

Round 5 - 7/5/2017 09:00
1 Shredder - Komodo 1/2
2 Chiron - Jonny 1/2

Round 6 - 7/5/2017 14:00
1 Komodo - Chiron 1/2
2 Jonny - Shredder 1/2

Round 7 - 7/6/2017 09:00
1 Jonny - Komodo 1/2
2 Chiron - Shredder 1/2

Round 8 - 7/6/2017 14:00
1 Komodo - Shredder 1/2
2 Jonny - Chiron 1-0

Round 9 - 7/7/2017 09:00
1 Chiron - Komodo 1/2
2 Shredder - Jonny 1/2

Play-offs:

First set, two games with 45m+15s time control:
Komodo - Jonny 1/2 Jonny - Komodo 1/2
Chiron - Shredder 1/2 Shredder - Chiron 1/2

Second set, two games with 20m+15s time control:
Jonny - Komodo 1/2 Komodo - Jonny 1/2
Chiron - Shredder 0-1 Shredder - Chiron 1/2

Third set, two games with 10m+10s time control:
Jonny - Komodo 1/2 Komodo - Jonny 1-0

Final standings:
1. Komodo
2. Jonny
3. Shredder
4. Chiron
Erdogan Gunes receives the trophy

Hardware:
Shredder: 32 cores, Intel XEON E5-2697A v4, 256 GB internal memory
Jonny: 2400 cores, AMD Opteron 2.7 GHz, 64GB internal memory per node
Chiron: 32 cores, 2 Opteron 6274 2.2 GHz, 32GB internal memory
Komodo: 60 cores, Intel Xeon E7-8890 v2 2.8 GHz (Lenovo server)

Books and tablebases:
Shredder: private opening book, 6-men Syzygy endgame tablebases
Jonny: private opening book, 5-men Nalimov endgame tablebases
Chiron: Cerebellum opening book (converted to Polyglot format), 6-men Syzygy endgame tablebases
Komodo: opening book by Erdogan Gunes, 6-men Syzygy endgame tablebases

All games played in PGN format: HERE.
The final report by the organisers: HERE.

May 4, 2016

TCEC-9 is on

TCEC Season-9 started on May 1st.

If you have missed the first days of the event, there's nothing to worry about, because the first weeks feature the qualification stages, where there will be no serious fights like Komodo vs Stockfish.

Given that the biggest threats of three years ago, like Houdini, Gull and Critter, are no longer maintained by their authors, there has been no improvement on the rivals' side. So the show is expected to conclude with a Superfinal between the fish and the reptile once again.

Nevertheless, TCEC remains the only competition played with extreme hardware at tournament time controls. This is simply the highest level of computer chess you can watch as of today!

You can watch TCEC games live: HERE

9 reasons for Kasparov to restart playing chess

Deep Blue stopped playing chess just after crushing Garry Kasparov. It had been dismantled decades ago.

Garry Kasparov stopped playing chess 8 years after losing to Deep Blue. He was not dismantled, he only focused his brain on other tasks.

After a long break, a blitz event organised as an extension of the US Championship brought together the legendary World Champion of the 80s and 90s and three leading US players: Caruana, Nakamura and So.

No doubt it was a clever piece of show business to raise public interest in chess in the US, since the question of how well GK could still resist creates real suspense about the outcome of such an event.
In the end, GK not only showed a brilliant performance against three players from the world's Top-10, but he also demonstrated that tactical players keep their playing level high for much longer than positional players.

Now, if GK decides to return to active chess, I'm sure he will not regret it. Even though he could not win that blitz tournament, all his games were attractive and addictive.

Just imagine what more he could deliver with more preparation...

Read more on CHESSBASE

April 27, 2016

Kasparov is back for some blitz

It has nothing to do with computer chess, except for the never-to-be-forgotten match he played against Deep Blue in 1997. Anyway, this is news all chess lovers should be aware of.

He retired on top and chose to speak through statements rather than killer chess moves, so that no one could forget about him. I'm one of those who think he was far more successful on the board, hunting for a miracle move, than he is making speeches.

After a campaign for the FIDE presidency, which ultimately failed, he recently appeared for a blitz match against his old rival and later friend Nigel Short. He played far beyond expectations, in great style, and won 8.5 to 1.5!
Now GK comes back for a more serious encounter against the three strongest GMs of the US. Can he win this "human vs younger human" challenge?

To be honest, it's hard to guess, because one can easily see how unpredictable he could be during his career. I must admit that this unpredictability mostly refers to the unbelievable successes he achieved so many times.

Read more about it at Chessbase and the official tournament site.

March 1, 2016

Stockfish 7 Revisited: A clear tie between 3 builds

Huh and hah!

That was the first time I had dug that deep to discover a gap which possibly does not exist at all. Fortunately, I was able to keep myself from pushing that insane experiment toward infinity. There's no gold in this pit, man!

Given that I'm not wired enough to compete with the Stockfish testing network, 2448 games per engine should be more than enough for me and my modest tablet fleet. All 306 openings of TCEC-7 were played with both sides between all participating engines.

The chart shown below summarizes the result of 6120 games played by 3 Stockfish 7 builds, Stockfish 6 and Komodo 9.3.

There is no clear reason to replace the build compiled by Peter Österlund, released with Droidfish 1.60 & 1.61, with any of the builds that came from Jim Ablett. The latter have proven their quality by playing at exactly the same level, but they are not stronger, at least not at a 300+1 time control on an RK3188 processor running 4 cores x 1 GHz.

Program                  Elo    +  -  Games Score   Oppo   Draws

1 Stockfish 7 DF160    : 3368   8  8  2448  56.9 %  3320   66.8 %
2 Stockfish 7 Beta2 JA : 3364   8  8  2448  56.2 %  3321   66.6 %
3 Stockfish 7 JA       : 3364   8  8  2448  56.2 %  3321   66.7 %
4 Stockfish 6          : 3290   9  9  2448  43.1 %  3339   61.2 %
5 Komodo 9.3 32-bit    : 3259  10 10  2448  37.6 %  3347   48.6 %
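
As a sanity check, these Elostat figures are consistent with the standard logistic model relating score percentage to rating difference. A rough Python sketch (this is the textbook formula, not necessarily Elostat's exact computation):

import math

def elo_from_score(score: float) -> float:
    # Rating difference implied by a score fraction (0 < score < 1)
    # under the standard logistic Elo model.
    return -400.0 * math.log10(1.0 / score - 1.0)

# 56.9 % against opposition averaging 3320 implies roughly 3320 + 48 = 3368,
# matching the first line of the table above.
print(round(elo_from_score(0.569)))  # ~48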

Once again, Komodo 9.3 suffered badly and took last place, even behind Stockfish 6. One should remember that even Komodo 9.2 used to perform ~40 ELO ahead of SF6. What an unexpected regression...

In short, I will keep Stockfish7.DF160.arm7 in Rapidroid and will not worry about the lack of an official build of Stockfish 7. The latter will never be released, as confirmed in writing by Daylen Lang of the Stockfish helpdesk. Thus, the Droidfish build is the closest to deserving the "official" label, since the main app is mentioned on the official site.

Screenshots of the 5 stages of the huge round robin, based on lots of 50 openings each (56 in the last lot, to reach 306 openings in total):

TCEC-7: Openings 1 to 50 of 306

TCEC-7: Openings 51 to 100 of 306

TCEC-7: Openings 101 to 150 of 306

TCEC-7: Openings 151 to 200 of 306

TCEC-7: Openings 201 to 250 of 306

TCEC-7: Openings 251 to 306 of 306

In case you want to check how low the quality of the games was compared to Rapidroid, the games in PGN format, compressed with 7z (7.8 MB), can be downloaded: HERE

P.S.: The latest development version I posted yesterday was not available during the above tourney. Although I'm not sure I can bear another deep experiment to find out how much stronger the development version is, I may come back to the subject later on, because I'm rather interested to see how Komodo 9.3 with contempt 0 will deal with Stockfish 7, and both questions can be answered with a mixed double-gauntlet extension.

January 31, 2016

Stockfish 7 Revisited: No superiority after first round

As mentioned in a previous post, I'm currently running a relatively long tournament between different Stockfish 7 builds in order to find out which build is better, if there is any difference at all.

I must admit I won't be surprised if thousands of games reveal nothing precise. Maybe there's nothing to find.

After 1000 games played, there's no clear sign of superiority among the 3 builds of Stockfish 7. Even the gap between Komodo 9.3 and Stockfish 6 doesn't tell us much.

That needs more and more games. Therefore, there will be more and more games!

The standings after playing first 50 openings of TCEC-7:

1) Stockfish 7 JA        230.0 / 400
2) Stockfish 7 DF1.60    220.0 / 400
3) Stockfish 7b2 JA      218.5 / 400
4) Stockfish 6           170.5 / 400
5) Komodo 9.3            161.0 / 400

January 24, 2016

Stockfish 7 Revisited: Another battle of the Android builds

We still don't know which build of Stockfish 7 is the official one. Take a look at stockfishchess.org/downloads and you will notice that the "Stockfish 6 for Android" button is still there!

To make sure Droidfish 1.60 delivers the best build so far, a new tournament may be a good idea.

The engines to play are:
1) Stockfish 7 DF160
2) Stockfish 7 JA
3) Stockfish 7 beta 2 JA
4) Komodo 9.3
5) Stockfish 6 official

Komodo is here to reduce self-play effects, and Stockfish 6 is the reference against which the overall progress will be measured.

Since the previous Stockfish beta and development builds are not stronger, according to the last closed tourney, I'm no longer interested in them.

The games are to be played on an RK3188 quad core downclocked to 1 GHz, using a slightly longer 300+1 time control and TCEC-7 openings in lots of 50 positions each. That makes 1000 games per round, or 400 games per engine per round. As I can allocate one of the RK3188 tablets to the experiment continuously, finishing it will not be a problem.

The tourney started today with Komodo vs Stockfish 6. I see 17.5 to 13.5 so far, after 31 games played...

Fritz 14 warms up against Ivanhoe 9.47c beta and Toga Returns 1.1

I wish I had more reference devices with Exynos 4412, so that I could introduce new versions into Rapidroid more quickly.

Although I'm very curious about the level of Fritz, my hands are tied because of Stockfish 7, ExChess 7.88 and RedQueen 1.1.98, which must finish replaying all games of the past 17 rounds, in short 300+ games for each updated engine. They are all multi-threaded, which means both Exynos devices must remain busy. Grrr!

While waiting, I've conducted a test run on Rockchip 3188 between 3 remarkable candidate engines, to get a basic idea of their strength. Unfortunately, Gull 3 is not usable on this processor. It definitely requires Exynos and could not take part.
Test conditions:
* Rockchip 3188 processor downclocked to 1.0 GHz
* Last 50 positions of TCEC-7 including superfinal openings
* Time control of 180+1

The cost of progressing quickly with more samples is a big drop in chess quality:
* Faster TC: 900 / 180 = 5.0 times
* Downclock: 1.6 / 1.0 = 1.6 times
Overall: 5 x 1.6 = 8 times fewer nodes per game.

But I don't mind too much. The results clearly favored Fritz over Ivanhoe Beta 9.47c, and Ivanhoe over Toga Returns 1.1. I guess there's no hurry about Toga Returns; it can wait a while for Rapidroid.
Here is the summary, with speculative Elostat ratings based on an estimated average of 2950 ELO:
The games can be downloaded: HERE

January 19, 2016

Gull 3 for Android: Rev 5 JA qualifies for Rapidroid

Updated versions of engines already ported to Android are not enough for us ELO gourmets, hungry for engine battles. That's why it's extremely exciting to see Gull 3 knocking on the door of the cellphone and tablet arena.

Remember that Jim Ablett's first attempts to compile it had failed and he had given up. Now, a few months later, the revenge is complete, although it was not easy.

We have seen five builds, shaped by user feedback, first of all about the nodes-per-second display (still missing), and Jim added Syzygy support along the way in rev3, something I personally don't need for Rapidroid. I skipped the second build, which I consider not very significant.

Assuming that a sixth build will not come soon, I've conducted a closed tournament between the builds I've stored, just to see which one performs better, without caring about the functionality of the features. I simply said: "Let the strongest survive!"

Using the first 10 openings of TCEC-7 and a quick time control of 180+1, the tournament didn't take too much time. I guess the result is representative of the 900+2 time control of Rapidroid, though I can never be totally sure. However, time matters...
It's hard to distinguish the revisions as the names don't give enough details, so here's the brief summary:
1) Gull 3 rev1 42.5 / 60
2) Gull 3 rev5 42.0 / 60
3) Gull 3 rev3 35.5 / 60
4) Gull 3 rev4  0.0 / 60


Rev 4 seems to have a serious bug which makes it a whipping boy. Arithmetically, rev1 is the winner here, but I settled on rev5 as the best choice for Rapidroid because:
* It won against rev1,
* It's supposed to support tablebases,
* It's the latest one...

Now I'll be waiting for the end of the current gauntlets, which will finalize the February release of the list, and then I'll take care of Gull and also Fritz 14.

I hope the coming months will bring more competition to the Top-20 of Rapidroid.

November 15, 2015

News from the Rapidroid labs

Unless there are new engines or updates, Rapidroid is a non-stop, unlimited tournament running in divisions, on a promotion and relegation basis. There are basically 11 divisions of 9 to 11 engines each; after each round, 3 winners go up, 3 losers go down and the roulette turns again (see the sketch below).
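
A minimal Python sketch of that promotion/relegation shuffle (the 3-up/3-down rule and the division structure come from this post; the function name and data layout are just an illustration):

def shuffle_divisions(divisions):
    # divisions: list of lists of engine names, ordered from the top division
    # down, each list already sorted by that round's final standing.
    # The bottom 3 of each division swap places with the top 3 of the
    # division just below it.
    for upper, lower in zip(divisions, divisions[1:]):
        promoted = lower[:3]    # top 3 of the lower division go up
        relegated = upper[-3:]  # bottom 3 of the upper division go down
        upper[-3:] = promoted
        lower[:3] = relegated
    return divisions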

When there are newcomers to introduce, or updates, things slow down and get complicated, because the strict rule (why on earth did I invent it?) says: "Only one version of each engine may be included."
That means any updated engine requires that all previously played games be removed and replayed with the new version. All related PGNs, as well as all previous rankings, graphs, stats etc., must be updated too.

This is exactly an erase-and-rewind scenario. I know it's hard to do when there are dozens of rounds already played, but I'm sure the final result is the most trustworthy one, without rating distortion.

In the case of a new engine to introduce, it's still complicated, because I have to find the correct division for it without losing the balance between the number of players in each division. It may happen that no place is available in the most suitable division. In that case, the first or last engine of that division must be moved to the neighbouring division. More and more games to replay!

The number of ECO lines played by each engine must also be monitored and kept well balanced, to avoid effects from sensitivity to a given opening category. An engine should not always play ECO A, for example.
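
A hedged sketch of how such a balance check could be scripted from the Rapidroid PGN files, using the python-chess library (the file name and the idea of counting ECO letters per engine are my own illustration, not the actual Rapidroid tooling):

from collections import Counter, defaultdict
import chess.pgn

def eco_balance(pgn_path):
    # Count how often each engine has played each ECO category (A..E).
    counts = defaultdict(Counter)
    with open(pgn_path, encoding="utf-8", errors="replace") as handle:
        while (game := chess.pgn.read_game(handle)) is not None:
            eco = game.headers.get("ECO", "?")[:1]
            for engine in (game.headers.get("White"), game.headers.get("Black")):
                counts[engine][eco] += 1
    return counts

# e.g. eco_balance("rapidroid.pgn") -> {'Stockfish 6': Counter({'B': 14, 'A': 9, ...}), ...}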

After the October-2014 release, I'm currently working on the November rankings, which will integrate the changes below:

Arasan 18.1 will replace Arasan 18.0 (+20 ELO expected)
Cheng 4.39 arm5 will be replaced by arm7 (+30 ELO expected)

Deep Saros 2.3f arm7 will replace arm5 (+10 ELO expected)

Galjoen 0.31 will replace 0.30.2 (+30 ELO expected)

Maverick 1.5 Leiden will replace Maverick 1.0 (+60 ELO expected)

Floyd 0.6 will be added

Mini Rodent 1.0 will be added (yes, it may distort the list for a while, but both that Mini and Rodent 1.7 will be gone when Rodent 2 steps in)

Due to the above updates, I have delayed the introduction of the second dedicated computer, the Excalibur LCD Express. Hopefully no one will cry over that :-)

Now, back to digging it deep!

July 5, 2015

Komodo shock in WCCC-2015

The ICGA (International Computer Games Association) is not the FIDE of computer chess. At least, it has not been recognized as such for a while now by a large community of chess-addicted people.

Let's say the open-source era, with hundreds of free engines updated a zillion times a year, has become uncontrollable for such a hierarchical organisation. It's clear and obvious that programmers no longer like to be directed by, or dependent on, an authority. If you asked me about human players, I wouldn't reply differently either. After all, this is not a post about FIDE.

For many years, the WCCC held by the ICGA in various cities around the world was the only reputable event where top chess programs competed. Folks of a certain age will surely remember those Mephisto, Shredder, Junior and Fritz years.

Today, information flows much more quickly through countless channels, thanks to unlimited communication tools, live tournament broadcasting, automated games leading to thousands of tests, and so on. Indeed, it's an environment where people don't meet each other in person; less warm, less emotional, maybe too mechanical.

I can never deny the precision brought by TCEC, which has earned its unofficial World Championship status, but the human-style tournament ambiance, with real chess pieces moved by human hands, still appeals to me. I still enjoy following the WCCC.

After Yokohama, Japan in 2013, this year the WCCC moved to Leiden, Netherlands, one of the major chess cities. Well, it was only a single round robin, with eight games played by each of the nine participants, running on different hardware. Consequently, there is no scientific basis to claim that the winner of such a tournament is the strongest of all available programs.

But who cares? It's not a testing lab, just a tourney. The games were fun enough, and JONNY won the tournament with 2400 cores running in parallel. That's all there is to tell.
These days, evaluation and search algorithms are so sophisticated and well developed that they often count for more than raw processing power. That's why it's hard for me to see extreme hardware still overcoming programming. Jonny deserves congratulations here, especially for defeating the mighty Komodo. That game looked like the true turning point which kept Komodo from winning the tournament. Once again, we have witnessed that surprises increase popularity. Frankly, who would enjoy watching games that end with the expected results?
The final standings:

# Name      G W D L Sco    SB
1 Jonny     8 6 2 0 7.0 24.25
2 Komodo    8 6 1 1 6.5 20.00
3 Hiarcs    8 3 4 1 5.0 14.75
4 Protector 8 3 4 1 5.0 14.25
5 Shredder  8 3 3 2 4.5 12.00
6 Ginkgo    8 2 4 2 4.0  9.75
7 The Baron 8 2 2 4 3.0  5.50
8 Maverick  8 1 0 7 1.0  0.00
9 Fridolin  8 0 0 8 0.0  0.00


The games in PGN format: HERE

The everlasting "how to break through a pawn fortress" game won by Jonny against Komodo:

June 29, 2015

Rapidroid welcomes Shredder 1.7.0 for iOS

It would be a shame to forget Shredder when introducing several iOS programs into Rapidroid, because it is known to be one of the oldest chess engines still on the market.

Shredder was one of the leading commercial engines on the Windows platform for years, though its brightest days are over now, after the arrival of the open-source era.

Given that the cost is acceptable, I recently purchased a copy of Shredder on iTunes and played two rapid games against Hiarcs, which looked like the strongest iOS program behind Stockfish. Hiarcs recently reached 2877 ELO on Rapidroid after a first pack of 12 games against 3 other iOS programs and strong Android engines like Robbolito, Gaviota, Discocheck and Grapefruit.

The encounter used the same opening with both sides: the Giuoco Pianissimo, 5.d3 d6 variation (ECO C54), taken from round 1 of TCEC-7. Hiarcs easily shredded Shredder with White, but the return game ended in a draw.

However, the second game looked more interesting to me, with a clear indication of their different positional vs material interpretations. Hiarcs dived into a courageous sacrifice which could not bring a clear advantage.

My first impression is that Hiarcs remains one step ahead of Shredder. Shredder showed a performance of around 2700 in my eyes. More games in the Rapidroid experiment will reveal the truth.

If you own an iPhone or iPad, should you then buy Shredder? Uhm, well, I don't think it's something you can't live without, unless you're a dedicated bits-and-bytes collector.

Now, here are the first two Shredder games, played at 15 minutes per side without increment.





March 8, 2015

RAPIDROID: Games played so far with full logs

Following a visitor request, I've uploaded all 1556 games played so far in Rapidroid. The file is a 1.4 MB 7z archive and contains one PGN with all games and all engine PV logs. I'm not IBM, so there is no reason to hide the logs :-)

I think it can also be useful for checking the depths reached, terminations etc., and for comparing across devices. Enjoy!

RAPIDROID GAMES IN PGN

February 28, 2015

Colossus 4.0 C64 finally meets Rapidroid!

I've been waiting a long time to see this day. Now it's done. Or at least started...

Rapidroid is an experiment I invented to remedy the lack of serious chess engine rankings dedicated to Android, the way it has been done for decades for PC programs. However, no matter how accurate it becomes, the fun remains limited if you confine it to a single platform.

I believe multi-platform lists are more fun. Even the SSDF, with all the recent top engines it is missing, is still a reference. Like it or not, you can't look anywhere else to see how mobile programs compare with PC programs, with the tabletop machines of the past, and so on.

Well, okay, but where is Android in SSDF? Nope...

To stop repeating the same question, I decided at the very beginning to introduce a few PC engines as trustworthy anchors and, besides these, as many retro programs as I could. I never had the opportunity. And then, in 2015, Jim Ablett's compiles took away the first spare-time chance.

After 8 months of waiting, I finally said: "OK, I won't wait anymore, I'll do it right here, right now!" The first one I'm injecting into Rapidroid is COLOSSUS 4.0, released exactly 30 years ago and acknowledged as the strongest Commodore 64 program ever.
One funny micro-experiment from last year is worth remembering here. That was Colossus running at 24 hours per game vs Stockfish DD at 15 seconds per game, the most unbalanced settings possible, in order to favor the old-timer against the current World #1. The odds ratio was 24*60*60 / 15 = 5760 times more time for Colossus. Time heals many things, but it can't recover 30 years. As expected, the two games didn't even last 50 moves. The paralyzed superchild easily overcame the turbocharged great-grandfather.

Today, I played 4 games using two different Colossus configurations: one at 100X time odds (more and more turbocharging, please) and another at the usual equal time control, against the weakest Android engine available, which is OliveChess. Olive looks like it is below 800 ELO, a great tool for any of us to finally score a win against a computer at chess. Lol.

100X for Colossus might be equivalent to log(100) / log(2) * 60 = almost 400 ELO more, using a very rough and speculative formula. Since I expect standard Colossus to play at around 1300 ELO, can I then expect 1700 ELO at 100X? I strongly doubt it, but we will see the difference in future Rapidroid lists thanks to the introduction of both configurations.
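
Written out as code, the same back-of-the-envelope estimate looks like this. A rough Python sketch, assuming the ~60 ELO per doubling of thinking time rule of thumb used above (a speculative constant, not a measured one):

import math

ELO_PER_DOUBLING = 60  # rough rule of thumb, as assumed in this post

def elo_gain_from_time_odds(factor: float) -> float:
    # Estimated ELO gain from giving one side `factor` times more thinking time.
    return math.log2(factor) * ELO_PER_DOUBLING

print(round(elo_gain_from_time_odds(100)))   # ~399, i.e. "almost 400 ELO"
print(round(elo_gain_from_time_odds(5760)))  # ~750, the 24h-vs-15s experiment above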

That makes too many paragraphs for my poor fingers. Let's take a look at the first games vs OliveChess, specially chosen to let Colossus finish with a victory. Colossus played on the VICE emulator while Olive was running on a tablet with a Rockchip 3066 dual-core processor. For Olive, single, dual or quad cores don't matter; only GHz counts. It is a single-core engine.
* Time control for Colossus 100X was set to 100 * 10 sec/move = 16 min 40 sec per move
* Time control for standard Colossus and for Olive was set to the usual 10 sec/move, because Olive cannot play with min/game controls.
* The opening was the same for all 4 games, an 8-move Sicilian taken from the first game of TCEC-
* Book and prediction were set to 0 for Colossus.
* The Colossus clock was set to 8 x the average time setting, not zero, to prevent extra time usage.

GAME-1: OliveChess 0-1 Colossus 4.0 C64 100X


GAME-2: Colossus 4.0 C64 100X 1-0 OliveChess


GAME-3: OliveChess 0-1 Colossus 4.0 C64


GAME-4: Colossus 4.0 C64 1-0 OliveChess

February 25, 2015

WCC to be held in Leiden in July-2015

Five links, hundreds of comments, claims, assumptions, objections, statements...

The related topic at Talkchess
About the World Computer Chess Championship at WIKIPEDIA

My point of view remains the same: chess is fun. So are all kinds of chess tournaments.

January 31, 2015

RAPIDROID: Wheels are turning again

After a lot of updated versions of the included engines, I have just started to replay games from the beginning.

Among the toppers, Stockfish 6, in a gauntlet vs Black Mamba, Critter, Firenzina, Robbolito and Komodo 8, showed no big changes, except for surprising losses against Komodo 8.

Stockfish 6 lost both games to Komodo, while its older brother had won 1½ to ½ from the same position. It seems the new boy didn't like this opening.



January 16, 2015

Stockfish headbangers: Twin duel SF5 vs SF121014 continued

I have just finished a new set of 96 games using all openings from TCEC-6 Stage 4. That makes 160 games played by the last two versions of Stockfish, on two different devices with two different CPU architectures. The experiment may still be open to criticism due to the "low" number of games.

As I don't want to leave any room for doubt about the result, I will continue with the TCEC-6 Stage 3 opening set, containing 56 positions played with both sides. After this third lot, the two fishes will have played 272 games.

After 160 games played, the error margin is +/- 32 ELO. That is still high for such close rivals. We need to reduce it to below 20.

Galaxy Note II    Win Draw Los Pts/Gam Score ELO
Stockfish 121014: +31 =107 -22 84½/160 52.8% +20
Stockfish 5     : +22 =107 -31 75½/160 47.2% -20

Asus ME173CX      Win Draw Los Pts/Gam Score ELO
Stockfish 121014: +37 =105 -18 89½/160 55.9% +42
Stockfish 5     : +18 =105 -37 70½/160 44.1% -42
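
For reference, the quoted +/- 32 ELO can be roughly reproduced from the win/draw/loss counts. A minimal Python sketch of a 95 % confidence interval under a normal approximation (not the exact method used by the rating tools):

import math

def elo_error_margin(wins: int, draws: int, losses: int) -> float:
    # Approximate 95 % error margin in ELO for a single match result.
    n = wins + draws + losses
    score = (wins + 0.5 * draws) / n
    variance = (wins + 0.25 * draws) / n - score ** 2  # per-game score variance
    std_err = math.sqrt(variance / n)                  # standard error of the score
    # Convert the score interval to ELO via the slope of the logistic curve.
    elo_per_score = 400.0 / (math.log(10) * score * (1.0 - score))
    return 1.96 * std_err * elo_per_score

# Galaxy Note II result from the table above: +31 =107 -22
print(round(elo_error_margin(31, 107, 22)))  # ~31, close to the quoted +/- 32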

Conditions:
GUI used is Chess for Android v5.0.1
15 sec/move, ponder and TB off
Hash set to 256MB on Note II and 128MB on Asus
Note II runs Exynos 4412 @ 1.7 GHz x 4 cores
Asus runs Intel 3745D @ 1.86 GHz x 4 cores