Talk:Computer chess

From Wikipedia, the free encyclopedia
WikiProject Chess (Rated C-class, High-importance)
This article is within the scope of WikiProject Chess, a collaborative effort to improve the coverage of Chess on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
 C  This article has been rated as C-Class on the project's quality scale.
 High  This article has been rated as High-importance on the project's importance scale.
 
This article is in the list of selected articles that are shown on Portal:Chess.
WikiProject Computing / Software / CompSci / Early (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
 C  This article has been rated as C-Class on the project's quality scale.
 Mid  This article has been rated as Mid-importance on the project's importance scale.
This article is supported by WikiProject Software (marked as High-importance).
This article is supported by WikiProject Computer science (marked as High-importance).
This article is supported by Early computers task force (marked as High-importance).
 

Criticism[edit]

1) There is a lot of detail in the article, yet not enough distinction IMHO between "chess monsters" such as Hydra and Deep Blue (which are not mere pieces of software you install from a CD) and software such as Fritz or Fruit, which don't require special hardware to run on your PC.
2) As of Feb 2008, I don't know of any chess software/engine benchmarked against humans in tournament play. Websites and magazines boast Elo (presumably FIDE) ratings of 2800 or over 3000 in order to win adepts and sell their stuff, but there is absolutely no basis for this unless the engines are calibrated in tournament play under standard conditions (and I don't mean a one-time, five-game match against a particular grandmaster). Consequently, the ratings given by the likes of Fritz and Chessmaster (under "rated play" mode) are also meaningless.
3) There was (and still is) a huge hype around the strongest software, to the point that the average Joe spends his money on something he'll never need, just because that engine defeated the former world champion or won God knows how many machine-vs-machine matches. What was forgotten in all the hubbub (and to a certain degree in the article) is the same problem chess software had 10 years ago: they can defeat grandmasters, yet they can't play chess. No, seriously... they are absolutely helpless at simulating human-like behavior, or, if you will, at making "intelligent mistakes", particularly when asked to lower their strength to a range roughly between 1200 and 2000 (which, ironically, represents the vast majority of beginners, intermediate players and chess enthusiasts). Don't ask for sources, just go on and play! Or read Steve Lopez's article at Chessbase.com 81.96.124.212 (talk) 03:17, 18 February 2008 (UTC)[reply]
The topic of computer chess Elo ratings and the missing computer–human calibration is covered in the "chess computers" article: http://en.wikipedia.org/wiki/Chess_computers#Chess_engine_rating_lists
--80.121.15.73 (talk) 11:35, 6 January 2009 (UTC)[reply]

Since there wasn't any reaction to this section, I have to assume that users either a) have no objections to the criticism and/or b) find it uninteresting. Either way, I'll take the liberty of including a small paragraph in the article about various shortcomings of computer chess. 81.96.124.212 (talk) 21:45, 21 February 2008 (UTC)[reply]

Horizon Effect[edit]

I was once bought a book called "How to beat your chess computer", which devoted an entire chapter to the horizon effect and how to exploit it against a computer. The basic theory is this: you play a series of moves which, after 8 moves, leaves the computer with a significant material advantage; then it's your move again, and you take its queen, because the computer only looked 8 moves ahead. Obviously, after the following move it would see the queen threat and compensate, but it's still a significant weakness of type-A chess software. Is this worthy of addition? —Preceding unsigned comment added by 90.197.21.209 (talk) 01:55, 27 April 2008 (UTC)[reply]

In modern engines it's a non-issue; quiescence search is the answer. In addition, any notable engine of today searches far deeper than 8 (and usually beyond 16, if you meant 8 "full moves" rather than 8 plies). Nczempin (talk) 15:22, 2 September 2008 (UTC)[reply]
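(For readers unfamiliar with the term: the idea can be sketched in a few lines. This is a toy illustration of quiescence search in negamax form, not any real engine's code; the positions are hand-built dicts with a static evaluation from the side to move and a list of capture replies, and the material values are illustrative.)

```python
# Toy quiescence search (negamax form). Evaluations are from the side
# to move; 'captures' lists the positions reachable by a capture.
INF = float("inf")

def quiescence(pos, alpha, beta):
    stand_pat = pos["eval"]           # static evaluation of the position
    if stand_pat >= beta:             # already too good for the opponent: cutoff
        return beta
    alpha = max(alpha, stand_pat)
    for child in pos["captures"]:     # search only captures, not all moves
        score = -quiescence(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Horizon-effect example: White appears to win a queen (+900), but the
# queen is defended; after Black's recapture the material is level again.
after_recapture = {"eval": 0, "captures": []}                    # White to move, even
after_qxq       = {"eval": -900, "captures": [after_recapture]}  # Black to move
root            = {"eval": 0, "captures": [after_qxq]}           # White to move

naive = -after_qxq["eval"]           # a search that stops right at the capture
quiet = quiescence(root, -INF, INF)  # a search extended through the captures
print(naive, quiet)                  # 900 0
```

Extending the search through capture sequences is exactly what defeats the book's trick: the "free" queen at the horizon is revealed to be worth nothing once the recapture is seen.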

Gruenfeld playing in 1988?[edit]

The "Chronology of Computer Chess" lists Deep Thought sharing first place with Tony Miles, and beating several highly ranked players. One of those players is Ernst Gruenfeld. According to his Wikipedia page, he died in 1962.

Did he come back from the dead, or is his Wikipedia page wrong? —Preceding unsigned comment added by 98.208.35.248 (talk) 18:54, 2 July 2008 (UTC)[reply]

Perhaps it was a different Gruenfeld; it's hard to verify given that there are no references. I removed Ernst Gruenfeld; whoever added him can provide a reference to the Gruenfeld that allegedly took part in the tournament. —Preceding unsigned comment added by Nczempin (talkcontribs) 15:27, 2 September 2008 (UTC)[reply]

A section on the effect computers have on human players[edit]

Should there be a section in the computer chess article on the effect chess computers have had on human vs human chess. I can think of 2 things. 1) A lot of preparation for tournaments, analysis of positions and analysis of entire games are done by computers. 2) Cheating in tournaments by using a computer is a problem. Claims of cheating have been made at all levels from master tournaments to world championship tournaments. Mschribr (talk) 19:49, 15 August 2008 (UTC)[reply]

Do you have references on the effects? Otherwise it's Original Research. Nczempin (talk) 15:19, 2 September 2008 (UTC)[reply]
I don't have a particular reference now, but I think Mschribr is right. I recall from several recent interviews with grandmasters I've read that almost every one of them mentioned how important and influential computers have become for them. - A 3rd example would be correspondence chess, where it is not a secret that it's human-guided computer chess now, more or less. See also
http://en.wikipedia.org/wiki/Correspondence_chess#Computer_assistance
Another effect of computer chess (in a more general sense) is internet chess software, used to play against people from around the world via chess servers. This is missing in the "Other Software" chapter, but has its own article: http://en.wikipedia.org/wiki/Chess_server
These servers certainly have an impact on many (casual) players' chess activity, and are also used for broadcasts of chess tournament games. For that, the servers themselves are my info source:
http://en.wikipedia.org/wiki/Category:Internet_chess_servers (I think ICC, Playchess and FICS are the biggest) --80.121.15.73 (talk) 11:28, 6 January 2009 (UTC)[reply]

Image copyright problem with Image:Fritz 8.JPG[edit]

The image Image:Fritz 8.JPG is used in this article under a claim of fair use, but it does not have an adequate explanation for why it meets the requirements for such images when used here. In particular, for each page the image is used on, it must have an explanation linking to that page which explains why it needs to be used on that page. Please check

  • That there is a non-free use rationale on the image's description page for the use in this article.
  • That this article is linked to from the image description page.

This is an automated notice by FairuseBot. For assistance on the image use policy, see Wikipedia:Media copyright questions. --10:47, 11 September 2008 (UTC)[reply]

Ehlvest[edit]

Why is there no mention of this Estonian grandmaster's loss in an odds match against a computer?--ZincBelief (talk) 11:45, 18 September 2008 (UTC)[reply]

It's in Human-computer chess matches. Peter Ballard (talk) 00:59, 19 September 2008 (UTC)[reply]

Computers overtake humans in 2006[edit]

The Wikipedia article says “Computers overtook humans as the best players in timed games around the year 2000”. I think the year should be changed to 2006 when Fritz beat world champion Kramnik. Before 2006 world champions Kasparov and Kramnik played 3 computer matches and all were tied. Are there any objections? Mschribr (talk) 22:16, 2 December 2008 (UTC)[reply]

I think 2006 is too late. Remember Hydra crushed Adams in 2005. I'd say computers overtook humans sometime between 1997 and 2005 (inclusive), but there were so few matches that a conclusive answer is impossible. The 2006 Kramnik match proved that the overtaking had happened, but it doesn't mean it happened in 2006 - it happened earlier. Peter Ballard (talk) 02:02, 3 December 2008 (UTC)[reply]
Agreed, and at any rate I think the sentence needs to be either general, saying that computers overtook humans and giving a rough estimate, or specific, saying that a computer beat the reigning world champion in a match and giving the exact year. A mix of the two would be a little silly. And certainly, 1997, 2005 and 2006 are all "around the year 2000" anyway. -- Jao (talk) 14:01, 3 December 2008 (UTC)[reply]
Any year before 2004 is incorrect because in 2002 and 2003 world champions Kasparov and Kramnik played 3 computer matches and all were tied. The computer did not overtake humans.
2005 is better than 2000 because it is closer to the real year. But using the 2005 Hydra vs. Adams match would be an extrapolation that a world champion would lose to Hydra. It is an educated guess, not an actual loss like the 2006 loss of Kramnik. I think Wikipedia should include facts, not guesses. The fact is the Kramnik loss in 2006 made it obvious computers had overtaken humans. Mschribr (talk) 18:37, 3 December 2008 (UTC)[reply]
The drawn matches after 1997 don't change the historical role of Kasparov's defeat against Deep Blue (II). That was the turning point. Maybe "computer" is too general here, as Deep Blue had huge, special hardware while the opponents in the later matches (except Adams-Hydra) used only generic PC hardware. It's also a question of definition: what does "overtake" mean? Does it mean the capability to win a regular match with long time controls against the strongest human - that was proven in 1997 - or is it a kind of flip-flop thing, so that as soon as any computer loses a similar match, we'd suddenly say no, comps did not overtake actually, and three weeks later maybe a comp wins again, etc.? Or does it mean the absolute certainty that a (certain) computer can always win a match like that? That wouldn't be verifiable, because the result of the next match is always unknown.
I'd vote for 1997. --80.121.15.73 (talk) 10:55, 6 January 2009 (UTC)[reply]
What historical role did Kasparov's defeat against Deep Blue have? It received a lot of publicity because of IBM's advertising department. But there were too few games in a close match to draw conclusions about Deep Blue's strength. There were 6 games with a score of 3.5-2.5. We cannot say that Deep Blue was the stronger player. Deep Blue's strength is unknown.
If Deep Blue had continued to play a total of 20 games with rated players then Deep Blue could be rated. If Deep Blue's rating was higher than Kasparov's rating then we could say that the computer overtook humans. But Deep Blue never played enough games to achieve a rating. IBM quickly dismantled Deep Blue. Deep Blue never played again.
In 2002 and 2003 the 2 strongest players, Kasparov and Kramnik, played 3 computer matches. The results were 3 drawn matches. We can say the computer was equal to the best human but did not overtake humans.
But in 2006, because of the conditions under which the Fritz computer beat Kramnik, we can say the computer did overtake humans. If there are no objections I will change the year to 2006. Mschribr (talk) 18:40, 24 March 2009 (UTC)[reply]

Wiktionary: Computer chess[edit]

There is a Request for Deletion at Wiktionary for the word/term "computer chess"; if you want to participate, go here. Green Squares (talk) 15:42, 28 December 2008 (UTC)[reply]

Recent additions[edit]

Since my signature didn't appear in the note where I undid the removal of my contributions (Using endgame databases, Other chess software), I'll add it here. I was somewhat surprised that they had simply been removed completely. Maybe other knowledgeable computer chess experts or fans can acknowledge what I've added.

As for the tablebase slowdown effect, which can happen with improper configurations, that is by far the most important possible downside of using tablebases. The other things mentioned in that chapter are rather theoretical and of very small practical relevance.

I think we don't need to discuss if opening training (!, not just learning) software exists? --91.113.0.160 (talk) 17:56, 4 January 2009 (UTC)[reply]

I'm removing it again until it's sourced. First, I doubt this has any significant effect. Database accesses are very quick. Second, even if it had a small effect it seems likely to be too minor to warrant mention. 24.177.121.141 (talk) 06:33, 5 January 2009 (UTC)[reply]

I'm not going to get into this argument, but it might be worth taking a look at http://www.horizonchess.com/FAQ/Winboard/weaktablebase.html where negative effects of tablebases are discussed, with some experimental evidence (which unfortunately doesn't distinguish between the different possible negative effects). But "database accesses are very quick" is clearly wrong, I'm afraid; disc accesses are absolutely *not* very quick compared with any reasonable quantity of computation. If your total disc latency is 10ms and your CPU clock is 2GHz then one random disc fetch costs you 20 million cycles. You can do a lot in 20 million cycles. (It would be interesting to see whether tablebases have less positive / more negative effect on engines that generally calculate faster -- trading deeper search for less accurate leaf-node evaluation. I don't know whether anyone's done that.) Gareth McCaughan (talk) 16:55, 5 January 2009 (UTC)[reply]
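(A quick sketch of the back-of-the-envelope arithmetic above. The 10 ms latency and 2 GHz clock are Gareth's illustrative figures, not measurements from any particular system:)

```python
# Cost of one random tablebase probe from disc, in CPU cycles, using
# the illustrative figures above: 10 ms total disc latency, 2 GHz clock.
disc_latency_s = 10e-3   # 10 milliseconds per random disc fetch
clock_hz = 2e9           # 2 GHz CPU clock

cycles_lost_per_probe = disc_latency_s * clock_hz
print(int(cycles_lost_per_probe))  # 20000000, i.e. 20 million cycles
```

At thousands of probes per second during an endgame search, that is why access times matter so much and why faster storage helps.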

Gareth is right; this is a topic much different from "normal database" accesses. In an endgame position, an engine may want to access tablebases hundreds or thousands of times per second, while at the same time running a calculation process at full CPU load. Hence, the access times matter a lot. All computer chess fans experienced in using tablebases know the slowdown effect, and many of them use the other, faster storage devices I've mentioned to minimize it.
How can an article be improved if editors don't trust experts? Sorry, I do not quite understand "sourced". What I describe is general computer chess knowledge, like the fact that you'll get wet in the rain if you have no umbrella. How does one "source" such information?
Also, what is wrong with mentioning opening training software? This is a remarkable sector among other chess software. Do I need to create a list of examples, programs, companies and websites to prove that? I am not Google. - I smell a somewhat strange subculture here. I thought Wikipedia is meant to collect and distribute useful information, not to exclude such information. --80.121.24.246 (talk) 20:49, 5 January 2009 (UTC)[reply]
Wikipedia trusts experts. All you need to do is reference a reliable and verifiable source which supports your additions. You should probably read up on the Wikipedia guidelines as well. (I hope you understand that anonymous editors such as yourself cannot be considered experts; see WP:OR.) The opening training software mention is probably fine; try adding it separately from the controversial topic (with references). --IanOsgood (talk) 21:14, 5 January 2009 (UTC)[reply]

Comparing humans and computers[edit]

What can we say about computer chess vs. humans? I tried to add

The top Elo rating ever was that of Garry Kasparov at 2851, while the estimated rating of the 2008 Swedish Chess Computer Association winner — Rybka on an up-to-date PC workstation — was roughly 3200.

and User:IanOsgood ripped it out with the comment "both incorrect and WP:OR". OK. What can we say about computer chess vs. humans? Please look at Computer Chess: The Drosophila of AI and at this diagram, where the author, L. Stephen Coles, has pages to say about the direct comparison of humans and computers. What can we say in terms of some parameter (like Elo) that the reader can quickly relate to? I am aware that down-to-the-individual-point comparisons of Elo are not 100.000000% valid among different "pools", which is why my statement uses terms like "estimated" and "roughly" with scores rounded to the nearest 100 points. But that is not good enough for IanOsgood. OK. Then what is good enough?--Spellage (talk)

From what I read of the history, the bit removed by User:IanOsgood is '(implying an ELO rating of perhaps 2400 or better)'. The Garry Kasparov articles say 2851 Elo is the highest, although you might like to say it's the highest so far (as of February 2009), as it may be improved upon at some point in the future. SunCreator (talk) 01:24, 16 February 2009 (UTC)[reply]
The FIDE and SSDF rating pools are not calibrated to each other. By mentioning a rating from FIDE and a rating from SSDF, you are implying that they are comparable which they are not. As a thought experiment, the SSDF list could just as likely have been calibrated so that the top rating would lie at 2500. In this case, would you still have found it noteworthy to mention the 2850 FIDE rating and the 2500 SSDF rating? Until FIDE decides to start allowing computers into FIDE rated events again, there is little you can say about human/computer ratings in a Wikipedia article. --IanOsgood (talk) 03:17, 16 February 2009 (UTC)[reply]
When the SSDF started rating computers, the first computer ratings were from games played between computers and human members of the SSDF. All subsequent games were computer vs. computer only. So the SSDF calibration starting point is the same as FIDE's. The SSDF ratings are an approximation of the FIDE ratings, so an SSDF 2700 should be near a FIDE 2700. Mschribr (talk) 21:24, 24 March 2009 (UTC)[reply]
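(For what it's worth, the standard Elo expected-score formula makes concrete what such a gap would mean if the two pools really were calibrated to each other, which, per the discussion above, is disputed. The 2851 and roughly 3200 figures are the ones quoted earlier in this thread:)

```python
def expected_score(r_a, r_b):
    """Standard Elo expectation for player A scoring against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# If (a big if) the FIDE and SSDF pools were directly comparable, the
# ~350-point gap quoted above would predict the engine scoring about
# 88% of the points against the top human.
e = expected_score(3200, 2851)
print(round(e, 2))  # 0.88
```

The formula itself is uncontroversial; whether the two ratings may be plugged into it together is exactly the point under dispute here.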
There is another issue that we should try to point out: now that the "best human chess player in the world" (whoever that is) can be beaten by a machine, one has to wonder if any human will be willing to devote their lives to chess the way the current crop of champions have. Chess has centuries of grand tradition, and there is already commentary that the interest in human vs. computer chess has started to wane, but I have to wonder if human interest in the game will not be diminished in such a way that few or no humans will have the motivation (let alone the talent) to match Kasparov. I am not saying that humans will not continue to play each other; but the motivation of being the "best in the world" has lost some of its charm. It would also be nice to inform the reader of:
  • What can I download today and play on my typical PC and is there some sort of ELO rating for that software?
  • What is that rating for the best open-source and what is that rating for the best closed-source downloads?
  • If I am willing to spend more money (say, in the realm of $5000 for commercial software or a better PC), what can I expect in terms of better ELO?--Spellage (talk) 01:49, 16 February 2009 (UTC)[reply]
The chess engine article should provide answers to your first two questions. There are several chess engine rating lists available for comparing engine/hardware/time control combinations against each other. (FYI, Wikipedia talk pages are not help forums, though there is a reference desk.) --IanOsgood (talk) 03:17, 16 February 2009 (UTC)[reply]

Section On Players Opposed to Computers[edit]

Should there be a section in the computer chess article about players opposed to playing computers in tournaments? I remember players protesting when computers were permitted to play at a tournament. There was a national tournament in Europe. Some players forfeited their games rather than play a computer. At the same time some top players like Kasparov did not mind playing a computer. Then there are some chess players that want to stop development of computer chess programs. They think computers will take the fun and mystery out of playing chess. People will stop playing chess when computers become too strong. Mschribr (talk) 03:17, 7 July 2009 (UTC)[reply]

See Luddite or Intelligent Design or whatever. Chess has perhaps lost some of its charm forever because the machines have clearly exceeded all living players. You could argue that the Bailey–Borwein–Plouffe formula has taken the charm out of computing digits of pi, because the formula was found not by formal reasoning but by computer search (see Quest for pi, the paragraph beginning "However, this derivation is dishonest,..." on page 11). I understand why FIDE refuses to be the one to rate humans and machines side-by-side. If they do not, then somebody else will, and then humans can sniff that "it's not FIDE". Is there an organized effort beyond that to suppress computer chess, or is it just a bunch of players each having their own spiritual crisis? If it is the latter, and there is no other supporting documentation/studies on such effects, then anything added to the article would be little more than commentary.--76.200.190.35 (talk) 20:04, 29 July 2009 (UTC)[reply]
Do we allow forklifts in weightlifting competitions? Even a modestly powerful forklift can outlift the strongest human. We don't enter cars in track events. If the organizers of tournaments want to include computers in their tournaments, that's their business. However, just as weight lifting or running is a human endeavor, so is playing chess.--RLent (talk) 20:15, 21 August 2009 (UTC)[reply]
If the forklift or the car is rated near the level of humans then we should let them compete. Probably the first forklift built and the first cars built. Once the machine or computer is proven too strong for the human then no more competitions. Until about 5 years ago computers would not have been too strong for many chess tournaments. Today we have passed that point in chess so no more computers in human tournaments. Instead of a car in track events have robots like Asimo run in track events. Also instead of a car in track events have a computer driven car like the Carnegie Mellon University driverless car Boss in car races. Today Asimo and Boss would both lose. But in 30 years their descendants would win. After they win we ban robots from track events and ban computer driven cars from car races. Mschribr (talk) 22:59, 23 August 2009 (UTC)[reply]
There was an organization. I don’t recall the name. It was something like WOCIT We Oppose Computers In Tournaments. I think they were vocal in 1980s and 1990s. Maybe somebody else can recall the name better than me. They claimed computers cheat and do not play chess. Mschribr (talk) 21:00, 29 July 2009 (UTC)[reply]

Performance Measures of Chess Programs[edit]

I don’t see performance measures discussed for specific programs in the computer chess article. Measures such as positions per second, size of opening book, size of endgame tablebase, ratings and other measures. These measures are not stated for notable chess programs such as Rybka 3, Fritz 10, Deep Blue, Belle, Chess 4.6, Cray Blitz and other programs. Then we could see how these measures affect programs and have changed over time. Should this be added to the computer chess article? Mschribr (talk) 22:43, 9 November 2009 (UTC)[reply]

See the chess engine article for specifics like this. --IanOsgood (talk) 01:40, 10 November 2009 (UTC)[reply]
The chess engine article does not give any specifics. It does not give numbers for positions per second, size of opening book or size of endgame tablebase for any particular engine. If it is more appropriate we can discuss this on the chess engine talk page. It would be interesting to compare the numbers for Cray Blitz, which won the Mississippi State Championship, to Hitech, which won the Pennsylvania State Chess Championship, to Junior, which scored 50% in the Dortmund tournament, et cetera. Mschribr (talk) 17:38, 10 November 2009 (UTC)[reply]

Human vs Computer Chess[edit]

No place in this article states that, mathematically, in theory a human would not be able to defeat a computer (a computer with the required processing power to calculate all possible moves to win). Do you not feel that it is necessary? —Preceding unsigned comment added by Shahee.saib (talkcontribs) 03:11, 7 March 2010 (UTC)[reply]

Your edit (quoted below) does not make sense.

It is impossible for a human to beat a completely configured computer at chess even if the human is able to attain the same processing power of the computer opponent. The result of the game will probably result in a draw or loss to the human.

What is "completely configured"? What does it mean for the human "to attain the same processing power" as the computer? Humans can still beat computers sometimes, so the last sentence is not quite correct. Bubba73 (You talkin' to me?), 03:44, 7 March 2010 (UTC)[reply]
I agree, my wording is not perfect. Although, mathematically, chess only contains a finite number of possible moves, the problem we have today is that PCs aren't powerful enough to compute that high number of moves. Theoretically, if a computer were able to compute all those moves (chess would then be solved), it would be (mathematically) impossible for a human to beat the computer. The reason why I'm putting this here is that I am an AI student doing research on chess. When reading up on this site, I noticed that this (that it is impossible for a human to defeat a "completely configured" computer) is missing. The reason why humans sometimes beat computers is due to the computer's difficulty level being set low, or the computer's processing power not being enough. I strongly believe that this should be mentioned in this article. —Preceding unsigned comment added by Shahee.saib (talkcontribs) 03:56, 7 March 2010 (UTC)[reply]
I think that what you are saying there is already stated, or at least implied, in the "Solving chess" section. Bubba73 (You talkin' to me?), 04:08, 7 March 2010 (UTC)[reply]
I don't think it is implied; it more implies that the possibility of humans beating computers will always exist until chess is solved. Chess being solved is a matter of processing power, no? It may be stated, but not explicitly. —Preceding unsigned comment added by Shahee.saib (talkcontribs)
If you want to, ask for opinions on the talk page of the chess project: Wikipedia talk:WikiProject Chess. Bubba73 (You talkin' to me?), 05:03, 7 March 2010 (UTC)[reply]
I shall do that, thanks. In the meanwhile, how is the following wording: In theory, a computer will always beat or draw a human at chess, assuming that the computer has the processing power to perform the required calculations and the human brain at best can match the processing ability of the computer opponent. This is due to the finite number of moves involved in a chess game and zero probability of the computer making a mistake. —Preceding unsigned comment added by Shahee.saib (talkcontribs)
The above was copied from my talk page. Bubba73 (You talkin' to me?), 16:54, 7 March 2010 (UTC)[reply]
My thoughts:
  1. The suggested addition/edits appear to be original research.
  2. Computer chess is not mathematical chess or theoretical chess. By that I mean there is a practical difference, in both software and hardware, between implementation and the theoretical possibilities. Shahee.saib seems unaware of the flaws of solving chess based on those issues, in an article that is supposed to be about computer chess and not theoretical chess.
  3. A (better?) section on the theoretical aspect of solving chess exists at First-move_advantage_in_chess#Solving_chess. Creating a Solving chess article is perhaps worth considering? Regards, SunCreator (talk) 15:52, 7 March 2010 (UTC)[reply]
A Solving chess article seems like a good idea to me. Bubba73 (You talkin' to me?), 17:34, 7 March 2010 (UTC)[reply]
I think the overwhelming majority of people agree it should not be in Wikipedia. While the discussion is taking place, I have removed it from Wikipedia. Mschribr (talk) 19:21, 7 March 2010 (UTC)[reply]
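(An aside on the theoretical claim discussed in this section: that a finite two-player game of perfect information has a determinate value computable by exhaustive search can be shown on a game small enough to actually solve. The sketch below solves a Nim-like subtraction game, a stand-in chosen only because chess itself is astronomically too large for this kind of brute force:)

```python
from functools import lru_cache

# "Solving" a toy game by exhaustive search. There are n stones on the
# table; a move removes 1, 2 or 3 of them; whoever takes the last stone
# wins. Solving chess would be the same kind of search in principle,
# just over an astronomically larger state space.

@lru_cache(maxsize=None)
def side_to_move_wins(n):
    # With no stones left, the previous player took the last one: loss.
    if n == 0:
        return False
    # Win iff some move leaves the opponent in a losing position.
    return any(not side_to_move_wins(n - k) for k in (1, 2, 3) if k <= n)

# Known result for this game: the side to move loses exactly when n is
# a multiple of 4.
print([n for n in range(1, 13) if not side_to_move_wins(n)])  # [4, 8, 12]
```

Once such a search has been completed, a player who follows the computed strategy can never be beaten; that is the sense in which a "completely solved" game is unwinnable for the opponent.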

Solving chess - new article suggestion[edit]

Discussion for creation of new article from sections in Computer chess#Solving_chess and First-move_advantage_in_chess#Solving_chess. Regards, SunCreator (talk) 21:51, 7 March 2010 (UTC)[reply]

I agree with splitting out the section to make it an article. It should be covered and doesn't quite fit with either computer chess or First-move advantage in chess. Bubba73 (You talkin' to me?), 21:55, 7 March 2010 (UTC)[reply]
I think solving chess belongs in the computer chess article. We would program a computer to solve chess, which is what computer chess is about: programming computers for chess. I do not think there is enough information to make a separate article. Mschribr (talk) 18:48, 8 March 2010 (UTC)[reply]
Programming a computer to play a game of chess is quite different from solving chess.Bubba73 (You talkin' to me?), 20:00, 8 March 2010 (UTC)[reply]
They are similar. In both cases, you need to follow the rules of the game: how the pieces move, no illegal moves. You need to evaluate a position to see if it leads to a win, draw or loss. The main difference is time: to play, the computer needs to manage its time; to solve the game, it does not. There are different levels of solving: ultra-weak, weak and strong. Ultra-weak does not look at every position. Mschribr (talk) 21:01, 8 March 2010 (UTC)[reply]
Computer chess is mainly about playing a game in real time. Solving chess is different - there is no real opponent and it wouldn't be done in real time. Bubba73 (You talkin' to me?), 21:47, 8 March 2010 (UTC)[reply]
Nevertheless, to solve chess the computer still needs to move the pieces to get the next position and evaluate whether the position is a win, loss or draw, the same as in playing the game. Of course there is an opponent: there are 2 opponents, White and Black. When the computer is solving the game, it is trying to see whether White wins, loses or draws, the same as when playing. Of course it is timed. Chess is such a computationally challenging game that if it is done inefficiently, it will not be practical for the computer to solve the game in a reasonable amount of time. The only thing missing is for the computer to manage its time. In fact, many chess playing programs have an infinite mode. This tells the computer to find the best move and not stop until it finds the move. It is the same as solving the game. It is used for deep analysis done overnight, over the weekend or longer. Mschribr (talk) 01:01, 9 March 2010 (UTC)[reply]
I disagree. Chess is too far from being solved to have an article about it. Overall prospects of solving chess can be discussed in Solved game. Various techniques applied towards that goal can be discussed in appropriate existing articles: forward search techniques and playing-strength increases in Computer chess and Chess_engine; solving endgames in Endgame tablebase; solving small-board variants in Minichess. Also, creating such an article should logically be followed by adding "Solving checkers", "Solving Shogi", "Solving Go", etc. articles, which will bring unnecessary fragmentation. Also, the current "Solving Chess" section essentially says "Solving chess is very very hard, no one has any idea when or if it will be possible" - do we need a whole new article for this kind of insight? 333ES (talk) 00:50, 9 March 2010 (UTC)[reply]
I agree with 333ES. Mschribr (talk) 01:01, 9 March 2010 (UTC)[reply]
Even though chess is far from being solved (as you say), the principle can be discussed (as well as the prospects). I think it is a bit out of place in this article and First-move advantage in chess. Bubba73 (You talkin' to me?), 01:48, 9 March 2010 (UTC)[reply]
The "Solving Chess" section currently does not have enough substance to justify its own article. Endgame tablebases are already covered in the Retrograde analysis and Endgame tablebase articles in good detail. General strength increase is discussed in Computer chess and Chess engine. Chess game complexity is mentioned in Game complexity. So, what is left for "Solving chess" is a number of quotes from various people about the possibility of solving chess. They generally agree that solving chess is theoretically possible (obviously) but practically not feasible in the foreseeable future (also obviously). Solved game appropriately summarizes this in one sentence: "The question of whether or not chess can be perfectly solved in the future is controversial." I agree that the "Solving Chess" section is out of place in First-move advantage in chess. I suggest merging these two sections, and leaving it as a section only in Computer Chess. If/when any progress is made towards solving chess, other than just constructing more tablebases, then the topic may get interesting enough for its own article, IMHO. 333ES (talk) 03:23, 9 March 2010 (UTC)[reply]
A Solving chess article could bring all of those elements together. My opinion is that the duplication would not be a problem. Each of those elements has an article that stands on its own. For instance, tablebases can be used in solving chess, but they have other uses as well. Bubba73 (You talkin' to me?), 04:09, 9 March 2010 (UTC)[reply]
I'm not convinced, but I've said enough, so I'll let others speak up. 333ES (talk) 15:23, 9 March 2010 (UTC)[reply]
Solving chess is pretty much a separate issue from the advantage of the first move - unless that advantage is enough to force a win. The more I've thought it through, the more that seems clear to me. It is also quite separate from computer chess. Computers would probably be used in any solution of chess, but that is not what computer chess is about. Bubba73 (You talkin' to me?), 20:33, 19 March 2010 (UTC)[reply]
Solving chess seems to me to be closely related to the First-move advantage in chess. If chess were solved, that would resolve the debate about how significant (if any) the size of the first-move advantage is. For example, if it were determined (say) that after 1.d4 the Dutch Defense and Queen's Gambit Accepted lose, but that the Nimzo-Indian Defense and certain Queen's Gambit Declined lines are sufficient to draw, that would indicate that the first-move advantage is significant, but not sufficient to win with best play. My view regarding the section's relevance is buttressed, I think, by the fact that First-move advantage in chess passed A-class, GA, and FA review without anyone saying that the "Solving chess" section didn't belong in it. In addition, like 333ES and Mschribr, I question whether "Solving chess" has enough content at this time to warrant a separate article. Given all that, and the lack of a consensus for moving the section, I think that it should stay where it is. Krakatoa (talk) 21:48, 19 March 2010 (UTC)[reply]
Solving chess and first move advantage are very different - practically two sides of the coin. First move advantage is mainly about whether or not the initiative of having the first move translates into an advantage (+-, +/=, or +/-) in an actual game, enough to give one player a better chance of winning. Solving chess would be to determine optimal play from the first move to the last, and would not resemble actual game play. In fact, chess could be solved "weakly" without saying anything about what the optimal moves are (just whether white or black has a forced win from the starting position, or if optimal play is a draw). And even if chess were solved and determined to be a draw, that doesn't mean that one side or the other doesn't have an advantage from the first move - it just means that the advantage doesn't lead to a forced checkmate.
If it was solved in detail, a person would not be able to make those moves unless he was allowed to look them up in the tablebase. Similarly, computer chess is very different. If chess were solved and a computer was playing chess, there would no longer be a chess engine - the program would simply look up the position in the astronomically-large tablebase.
Solving chess means determining whether or not either side has a forced checkmate from the starting position. That has little bearing on whether or not one side or the other gets an advantage (+/= or +/-) as a result of moving first. Bubba73 (You talkin' to me?), 01:22, 22 March 2010 (UTC)[reply]
It seems highly relevant to me. I think we may be reasonably sure that Black will not be determined to have a forced checkmate from the starting position. If White is proved to have a forced checkmate from the starting position, that is extremely relevant to the article: it would prove that Weaver Adams and Rauzer, who claimed that White has a forced win, were right. If it is proved that White does not have a forced win, but that Black has to follow a relatively narrow path in order to hold the draw, that would also be relevant to the article. (That would be consistent, broadly speaking, with Berliner's contention that White has a forced win in a number of major openings and that only a limited number of defenses may be playable.) If, on the other hand, it is shown that a large number of defenses are playable for Black (i.e. do not lose), that would tend to support Adorjan's claim that "Black is OK!" and that White's supposed advantage is a chimera. Krakatoa (talk) 03:33, 22 March 2010 (UTC)[reply]
Solving chess is a computer science issue and has little to nothing to do with chess. The first move advantage is a chess issue. Bubba73 (You talkin' to me?), 05:04, 22 March 2010 (UTC)[reply]
The Chess Wiki has some information in their article about solving chess which is not contained in this article. For example, they mention a "final theory of chess" wiki. It seems unlikely that anyone can solve chess by building a wiki full of computer analysis, however, I wonder if it deserves mention in an article about computer chess. References - [1], [2], [3], [4], [5], etc. 65.81.143.164 (talk) 20:34, 3 June 2010 (UTC)[reply]


Assistance needed to correct deceptions about Computer chess[edit]

Greetings fellow computer chess enthusiasts, I need your help to fix the deceptions at the Arimaa article. Also see talk:Arimaa. More than one person is removing correct information. I am trying to add the following to the arimaa article: 1) Chess is harder than Arimaa because chess has a higher state-space complexity than arimaa. 2) Chess programs are more successful than arimaa programs in beating humans because computer chess had thousands more programmers working 40 more years than arimaa programmers. They say chess programs are more successful than arimaa programs because chess is easier than arimaa. 3) They say humans are better at Arimaa than computers because humans won the Arimaa challenge. However, the Arimaa challenge is biased against the computer. For example, the Arimaa challenge rules say the human may win fewer games than the computer and still win the arimaa challenge. See http://arimaa.com/arimaa/challenge/2009/ Mschribr (talk) 12:36, 28 March 2010 (UTC)[reply]

I think you are certainly right about #2 - some people spend a major part of their career working on a chess program. I don't know about #1 or #3. However, the problem may be that you need references for those things. Bubba73 (You talkin' to me?), 14:44, 28 March 2010 (UTC)[reply]
1) It's not obvious to me that chess has higher space complexity. For example, chess pawns can only be located on ranks 2 to 7, while arimaa's pawns (rabbits) can be anywhere on board. Also, in chess positions where king is under check are illegal, in arimaa there is no such limitation. If chess indeed has higher space complexity, you need to post a link to a proof, available somewhere else (since Wikipedia is not a place for original research).
2) What the Arimaa article says is that Arimaa is harder to write a computer program for, compared to chess. Due to Arimaa's huge branching factor, this might well be true. For example, Go and Shogi have huge branching factors and are much more popular than Arimaa (and thus many people are writing programs for them), yet the strongest computer programs are still much weaker than the strongest humans in those games. Also, while I agree that Arimaa has much fewer programmers than chess, it also has much fewer human players than chess. So, it's a more complex picture; you can't explain it by just one factor.
3) I agree that challenge is a very weak support to Arimaa's complexity myth.
I suggest to move this discussion to Arimaa talk page where it belongs. 333ES (talk) 14:52, 28 March 2010 (UTC)[reply]
The same problem exists at the article for the Deep Blue chess computer. My goal was to continue the discussion at talk:arimaa. Now it also continues at talk:deep blue. We see chess has a higher state-space complexity than arimaa from the Wikipedia article State-space complexity. Chess is 10^50 and Arimaa is 10^43. The reference given was Wikipedia. Nevertheless, the statements were removed. The Arimaa article says arimaa is harder than chess, but there is no reference for this. There are fewer arimaa players. However, it is easier for humans to transfer experience from other games, such as Backgammon, to arimaa. Mschribr (talk) 17:41, 28 March 2010 (UTC)[reply]
I'll try one more time. Consider these two statements: 1. "Arimaa is harder than chess" 2. "Arimaa is harder than chess to play for computers". You seem to be arguing against the first (I have no idea what it even means). The Arimaa article is arguing for the second. The second statement basically means "Arimaa computers are less successful than chess computers when playing against humans", and all evidence to date supports this claim. It has nothing to do with Deep Blue, but everything to do with ordinary desktop chess engines, which have been on par with humans for about 10 years, and superior for about 5. Chess may or may not have greater state-space complexity, but the chess engines of today are without any doubt much superior to the strongest humans. We can't say anything like this about Arimaa. You may want to compare Arimaa to Shogi instead, where humans are also still stronger than computers. —Preceding unsigned comment added by 333ES (talkcontribs) 03:41, 31 March 2010 (UTC)[reply]
I am arguing against your statement 2. I am taking your statement “Arimaa is harder than chess to play for computers” and clarifying it. I am replacing it with these two statements.
  1. Arimaa computers are less successful than chess computers when playing against humans.
  2. Successful arimaa computers are harder to build than successful chess computers.
Statement 1 is observed to be true. The arimaa article says statement 1 proves statement 2 is true. However, statement 1 is true because computer chess had 40 more years and thousands more programmers than computer arimaa. If arimaa computers are not as successful as chess computers, it does not mean computer arimaa is harder than chess. It means computer arimaa has put in a small effort, which has not produced success. When computer arimaa has had the equivalent number of years and programmers as computer chess, and arimaa computers still lose to humans, then we can say arimaa is harder than chess for the computer. However, until then we do not know.
There is more than one way to compare how hard one game is relative to another. 1) The game with the higher state-space complexity is the harder game. 2) The game with more levels or titles is harder. 3) How many years, on average, does it take a player to reach a given percentile rank? 4) How many years does it take the computer to beat the world champion? The harder game will have higher numbers for all of these.
Chess does have a greater state-space complexity than arimaa: chess, at 10^50, is higher than arimaa, at 10^43. We also cannot compare arimaa to shogi. Humans are stronger than computers at shogi because shogi is harder than chess. A bigger effort, of about 10 more years, is needed for the computer to beat the best shogi players. Mschribr (talk) 16:54, 8 April 2010 (UTC)[reply]
Yes, chess has more programmers, so chess programs are stronger. However, I don't understand why you are forgetting (or ignoring?) that chess also has more human players. Human chess has hundreds of years of history. Thousands of books were written about various chess-playing skills. Human chess was refined and perfected by many generations of masters. So, while computer arimaa is behind computer chess, human arimaa is also behind human chess. Which one is more behind? Hard to say, but I'd guess human arimaa is more behind. You have to do an honest comparison, not just take the half of the story which supports your theory and throw away the half which contradicts it.
You observe that chess has slightly higher state-space complexity than arimaa. Then you suddenly jump to the conclusion that this necessarily makes chess harder to play for computers. I think the logical connection is missing. A game with higher state-space complexity may be harder to play for computers when other parameters are equal, but in the arimaa-chess comparison they aren't. For example, you seem to forget (or ignore) the branching factor. The average branching factor in arimaa is much larger than in chess. Unlike state-space complexity, the branching factor has an immediate, strong impact on the relative strength of computers.
The other criteria you suggest (number of titles, number of years to reach a rank, number of years taken by a computer to beat the world champion) are useless for this discussion, because they heavily depend on historical factors, and on the structure of human competition. 1. Number of titles? A rank, or title, is a totally arbitrary and artificial human-made concept. 2. Average years to reach a rank: depends on the definition of a rank, on the proportion of hobbyists to serious players, and on many other factors. 3. How many years did it take for a computer to beat the strongest human? Right, just which year will you take as a start for counting? Are you aware that computer chess began even before computers existed? 333ES (talk) 13:42, 12 April 2010 (UTC)[reply]
No, human arimaa is not far behind human chess. Computers do not progress the same way humans progress. Humans learn faster. Computers cannot easily learn. Humans reach a high level quickly and then improve slowly. Therefore, chess reached a high level of play hundreds of years ago. Since then, improvements in chess have been slower, on average about one rating point a year for the best players. So great chess games from 60 years ago are studied and considered great today. Great arimaa games of today will still be considered great and studied in 60 years. This is because the human brain has not changed in thousands of years. However, computers are faster and programs are smarter every year. Computers progress differently. They progressed steadily, about 30 rating points a year, in chess. Computer chess games from 60 years ago were bad, the same way computer arimaa games of today are bad. However, if there is the same effort in computer arimaa, then computer arimaa will also be great. Computer arimaa is less successful than computer chess because of the number of programmers and the number of years. Arimaa players are not far behind chess players because humans learn fast. Mschribr (talk) 02:52, 27 April 2010 (UTC)[reply]

Kasparov on computer chess[edit]

Here's a recent (February 2010) essay on computer chess by Garry Kasparov. I found it interesting so I figured you might too. Cheers, —ZeroOne (talk / @) 22:29, 30 March 2010 (UTC)[reply]

Kasparov’s article is interesting. I have respect for Kasparov as a chess player and as a humanitarian. However, he does not understand much about computers. Kasparov says “But there were other goals as well: to develop a program that played chess by thinking like a human, perhaps even by learning the game as a human does.” He does not believe that today’s computers that easily beat world chess champions are intelligent. He does not appreciate the achievement that a computer the size of a mobile phone can beat the world champion. He wants computers to play chess the way humans play chess. However, we do not know how humans play chess. We cannot look inside the mind and see how it works. Now that the computer has surpassed humans in chess ability, what is the point of creating a computer that plays more like a human? So the computer can make human mistakes and lose? He should take comfort in the fact that there are games where humans are superior. There are old games into which programmers have put a lot of effort where humans are still better, such as Shogi and Go. Maybe when computers are superior in Go, in about 30 years, computers will think like humans. However, I doubt it. Mschribr (talk) 22:21, 30 April 2010 (UTC)[reply]

Hiarcs is smarter than Deep Blue[edit]

128.237.86.18 changed “Pocket Fritz 4’s higher performance comes from being smarter and not from faster computers” to “Pocket Fritz 4’s higher performance comes from having a stronger evaluation function”. The higher performance of Hiarcs in Pocket Fritz 4 compared to Deep Blue is more than just a stronger evaluation function. There are many improvements to Hiarcs that improve performance. Some of the improvements are pruning and sorting the moves, which help Hiarcs pick better moves. To summarize the many improvements, we can say Hiarcs is smarter than Deep Blue. I will revert the changes made by 128.237.86.18. Mschribr (talk) 23:17, 14 December 2010 (UTC)[reply]

You can't reliably say anything about any modern program compared to Deep Blue without it being original research. --IanOsgood (talk) 01:44, 15 December 2010 (UTC)[reply]
Of course we can compare Deep Blue to modern programs such as Pocket Fritz 4. We know in 1997 Deep Blue evaluated 200 million positions per second and achieved a performance rating of 2862. We also know in 2009 Pocket Fritz 4 evaluated fewer than 20 thousand positions per second and achieved a performance rating of 2898. Therefore, we can compare Deep Blue to Pocket Fritz 4 with the following statements. Pocket Fritz 4 played 12 years after Deep Blue played. Deep Blue evaluated more positions per second than Pocket Fritz 4 by a factor of 10 thousand. Pocket Fritz 4 achieved a performance rating 36 points higher than Deep Blue. All current programs have many more improvements than just an improved evaluation function since 1997. Some of the improvements are pruning and sorting moves. These improvements make current programs such as Pocket Fritz 4 smarter than older programs such as Deep Blue. Mschribr (talk) 02:19, 23 March 2011 (UTC)[reply]
No, Ian is correct. You can compare whatever you like, but in the text of a wikipedia article you need a reliable source with your conclusions, not just the data you are drawing inferences from. A wikipedia editor drawing nontrivial inferences = prohibited original research. I'm not even sure I know what it means to say that one chess program is "smarter" than another. Quale (talk) 05:37, 23 March 2011 (UTC)[reply]
Ok. What about saying, “Pocket Fritz 4’s higher performance comes from having a stronger evaluation function, discarding a larger number of inferior moves and better at deciding which moves to evaluate first”? Mschribr (talk) 07:49, 23 March 2011 (UTC)[reply]
What we would need is a reliable source to establish that that's true. Can anybody find one? If not, we can't really include it. Alzarian16 (talk) 09:00, 23 March 2011 (UTC)[reply]
Then what is the source for the statement “Pocket Fritz 4’s higher performance comes from a stronger evaluation function”? Mschribr (talk) 10:16, 23 March 2011 (UTC)[reply]

This article has way too much unsourced content at the moment, and that is indeed one example. It might not be a bad idea to remove the majority of it, but adding even more would just make things worse. Alzarian16 (talk) 10:44, 23 March 2011 (UTC)[reply]

Remove the majority of "it" - what is "it"? Do you agree the statement should be removed now? Mschribr (talk) 11:44, 23 March 2011 (UTC)[reply]
When I say "it", I mean the sections on Computers versus humans and Implementation issues, both of which have serious problems with lack of sourcing. But yes, I agree that the queried statement should be removed. Alzarian16 (talk) 13:04, 23 March 2011 (UTC)[reply]
Done. Thank you. Mschribr (talk) 13:31, 23 March 2011 (UTC)[reply]

Make the simplest Chess game[edit]

I want to discuss with you all whether we can cooperate to make a sample chess program. It may be very simple and search only to a very low depth... — Preceding unsigned comment added by 113.199.217.11 (talk) 13:24, 6 July 2011 (UTC)[reply]

List of top computer programs[edit]

Based on what criteria? On what evidence? Most of the lists out there are problematic.

I will start by marking it, but what I really want to do is delete it or have a new article on rating computer chess programs, both against each other and against humans. Lovingboth (talk) 08:24, 7 November 2012 (UTC)[reply]

Looking, this is much better in the Chess Engines article, which is linked to from here, so I'm deleting it. Lovingboth (talk) 16:23, 7 November 2012 (UTC)[reply]
Oooh, I was beaten to it. Thank you Sun Creator. Lovingboth (talk) 16:25, 7 November 2012 (UTC)[reply]
Good call, in my opinion. --Guy Macon (talk) 17:31, 7 November 2012 (UTC)[reply]

Are they good players?[edit]

This may seem a perverse question, since computer chess programs have defeated grandmasters, but there is a point to it. A 300-pound boxer of negligible skill could probably beat a 100-pound champion, but would not be considered a good boxer on that account. Do chess computers make brilliant moves? Do they develop innovations in strategy or tactics? Or do they just win by brute computational force? 109.158.46.125 (talk) 16:25, 25 February 2013 (UTC)[reply]

The article does in fact touch upon this issue in the ending database section. The "tactics" available to machines to win some positions and draw others stem exactly from their superior computational power. So, if by brilliant moves you mean those that the human mind can comprehend and encompass - then no; this is exactly what sets machines apart: their complexity is beyond human ability, and so the weight-class analogy is justified to some extent. 91.203.168.156 (talk) 23:43, 10 November 2013 (UTC)[reply]

Do CPUs Have to Follow the Same Rules as Humans?[edit]

It seems to me that in these contests it is unfair because the computer CPU does not have to follow the same rules as humans do. Are humans allowed to carry the data that the CPUs have access to? (68.94.201.73 (talk) 03:59, 6 January 2015 (UTC))[reply]

Role of programmers[edit]

The article as it stands (on February 24, 2015) tends to give the impression that the improved performance of computers is largely a matter of increased processing power, and the resulting increase in the number of moves ahead a computer can assess in the permitted time. This may be true (I'm no expert), but I wonder if the role of the programmers is being understated? Presumably programmers want to win, and will adjust their programs in the light of experience, to eliminate weaknesses and take advantage of developments in chess theory. For example, I guess (though it is not stated) that programs will incorporate the latest results of the standard openings and defenses, whether from human or machine play. Without such work, I do not see how they would stand a chance against a human grandmaster, who would crush them with superior knowledge of the openings. 86.152.29.65 (talk) 14:15, 24 February 2015 (UTC) [Added to my own comment: I just found an article which states that: "A good Chess computer will have a large openings database. For more common openings such as the Ruy Lopez, Giuoco Piano, Queen's Gambit and the Sicilian Defense, a computer will have a database containing up to about white and black's first 20 moves of many variations of these openings. Therefore in a timed game, a computer will not need to use up any time at all if the human opponent sticks to the moves in the computer's database so it will give the computer much more time in the middle and end games to do larger and deeper searches" If this is true, I think it should be made clear in the article, which at present has a section on endgame play but not openings. 86.152.29.65 (talk) 14:22, 24 February 2015 (UTC)[reply]

To first order, program strength is largely dependent on processing power. Programs running on 1 GHz hardware will dominate programs on 200 MHz hardware. However, there have also been great strides in search, evaluation, and optimization since 1997. Some of it is from programmer creativity, and some of it is from automated parameter tuning.
Your second question on opening theory has a more subtle answer. There is almost no cross-fertilization between computer chess and GM opening theory, because the driving force behind computer improvement nowadays is rating lists rather than tournaments or play against humans. In the rating lists, programs are not allowed to use their own books, but instead play their matches from thousands of relatively even set starting positions. Computer chess products, on the other hand, all come with opening books of decent depth and breadth because customers expect it. But these days there is little in the way of book tuning to bring out the strengths of particular programs, because there is little demand for it. --IanOsgood (talk) 01:17, 27 February 2015 (UTC)[reply]

search tree nodes vs board positions for estimating complexity[edit]

This sentence occurs in the section Solving chess: The difficulty in proving the latter lies in the fact that, while the number of board positions that could happen in the course of a chess game is huge (on the order of at least 10^43 to 10^47), it is hard to rule out with mathematical certainty the possibility that the initial position allows either side to force a mate or a threefold repetition after relatively few moves, in which case the search tree might encompass only a very small subset of the set of possible positions.

The sentence confounds nodes in the search tree with permutations of pieces on the chess board (i.e. "positions"). The number of nodes in the generalized search tree has been estimated as 10^123, vastly larger than the number of possible positions. That's because the same chess position can be reached by multiple sequences of moves, including the possibility of a threefold repetition on any move. It's not maintainable that there's a possibility of one side or the other being able to force mate or a threefold repetition after only a *small number* of moves, because either our chess masters or computer analyses would have discovered that exigency by now, if by *small number* we mean, for example, fewer than 20 moves per side (about half an average game). Half a game is not a *small number* of moves. The real reason that a "perfect play" game tree is very substantially smaller than the generalized game tree is that a perfect play tree won't contain any moves on the part of the program that fail to maintain the game-theoretic value of the position in which they're played. I.e., the program won't play (or need to search) any losing move if there are winning or drawing moves available, nor search or play any drawing move if winning moves are available. In general, moves that maintain the game-theoretic value of the position are a small fraction of the possible moves in any position. (It may also be worth noting that the estimate of board permutations includes impossible and illegal positions, like both kings in check, pawns on the first rank, and demonstrably unreachable positions.)

The existing sentence needs to be replaced and expanded, though the sense of it, that we're not going to define a perfect play search tree anytime soon, remains true. Sbalfour (talk) 00:28, 15 September 2015 (UTC)[reply]
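To make the distinction in the comment above concrete, here is a back-of-the-envelope calculation contrasting the two quantities. It is only a sketch using the commonly cited estimates (roughly 35 legal moves per position, roughly 80 plies per game, about 10^47 distinct positions); the exact figures are assumptions, not settled values.

```python
# Rough comparison of state-space size vs. a Shannon-style game-tree
# estimate, using commonly cited (approximate) figures.
import math

branching_factor = 35        # average legal moves per position (estimate)
game_length_plies = 80       # average game length in half-moves (estimate)
state_space_exponent = 47    # upper estimate of distinct positions

# Shannon-style game-tree estimate: b^d leaf nodes.
game_tree_nodes = branching_factor ** game_length_plies
tree_exponent = math.log10(game_tree_nodes)

print(f"game-tree size ~ 10^{tree_exponent:.1f}")
print(f"state space    ~ 10^{state_space_exponent}")
# The tree dwarfs the state space because the same position can be
# reached through many different move sequences (transpositions).
```

The roughly 75 orders of magnitude between the two numbers is exactly the gap the comment attributes to positions being reachable along many different move sequences.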

"hundredths of a pawn"[edit]

"Evaluation functions typically evaluate positions in hundredths of a pawn, and consider material value along with other factors affecting the strength of each side."

Do they really round it like that? Why not use a float instead? If the text is correct, it should elaborate on why hundredths of a pawn is typical.
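For what it's worth, "hundredths of a pawn" (centipawns, pawn = 100) is a long-standing convention rather than a rounding of some truer floating-point score: integer scores are fast and compare exactly. The toy sketch below illustrates the unit with material only; the piece values are the conventional textbook ones, and the list-of-letters "board" is a hypothetical stand-in for a real board representation, not any particular engine's code.

```python
# Toy material-only evaluation in centipawns (conventional values:
# pawn=100, knight=320, bishop=330, rook=500, queen=900).
# Real engines add many positional terms on top of material.
PIECE_VALUES = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900}

def evaluate(board):
    """Return the material balance in centipawns, positive for White.

    `board` is a simple iterable of piece letters: uppercase = White,
    lowercase = Black (a stand-in for a real board representation).
    """
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has a queen and a pawn; Black has a rook and a bishop:
# 900 + 100 - 500 - 330 = +170 centipawns, i.e. White is up ~1.7 pawns.
print(evaluate(['Q', 'P', 'r', 'b']))  # 170
```

A score of +170 reads naturally as "White is better by about a pawn and two thirds", which is why the centipawn scale stuck.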


If you want it[edit]

This user plays computer chess.

IQ125 Template:User Computer Chess (talk) 09:54, 6 November 2016 (UTC)[reply]

takeback[edit]

Can a person beat a program if the person has no time controls, can use other programs, and can take back as many moves as they want? I came here hoping to find an answer. — Preceding unsigned comment added by 47.72.78.79 (talk) 07:03, 22 October 2017 (UTC)[reply]

Why doesn't AI create its own positional evaluation and search techniques?[edit]

Why not let AI start with only knowledge of chess rules? From scratch? NiRaPa (talkcontribs) 07:27, 24 October 2017

CCRL merge[edit]

I wasn't around for the original CCRL deletion debate, but it seems to me that Chess engine#Ratings would be a better place to merge this content. (And while you're at it, merge yet another computer chess rating list article: Chess Engines Grand Tournament.) --IanOsgood (talk) 18:07, 28 December 2017 (UTC)[reply]

History section[edit]

The history section is a scribble, larded with trivia. There's no sense of what actually matters, or how events are related to each other by progressive historical accumulation of knowledge. Some of the most important events, like Northwestern Chess in the mid-late 70's, the publication of Glaurung (predecessor of Stockfish) in 2004, and Fruit in 2005, are omitted. Bullet points are a bad presentation anyway. This isn't a slide show, it's an encyclopedia article. I propose moving the current history section here, until it can be rewritten as narrative text.

For that, I propose 7 major historical periods:

  • 1. The pre-computer age, <1950
  • 2. The selective search age, 1950-1975
  • 3. The full-width search era, 1975-1980
  • 4. The chess machine era, 1980-1991
  • 5. The microcomputer era, 1980s-1997
  • 6. The computer supremacy era, 1997-
  • 6a. The mobile device era, 2009-
  • 7. The AI era, 2017-

The precomputer age ends with the publication of Shannon's paper in 1949-50. The selective search era ends with Northwestern University Chess 4.5 in 1975. The machine era begins with Belle in 1979. The Microcomputer era begins with Fidelity devices in the mid-1980's, possibly earlier. The computer supremacy era begins with defeat of Garry Kasparov by Deep(er) Blue in 1997. The mobile devices era begins in 2009 with Pocket Fritz (Hiarcs 13) running on a smart phone winning a super-grandmaster tournament earning a performance rating of 2900. The AI era begins with AlphaGo/AlphaZero in Dec. 2017 decisively defeating the world's best Chess, Go and Shogi programs. Sbalfour (talk) 00:29, 1 September 2019 (UTC)[reply]

Sounds interesting, but I've never seen the computer chess eras divided like that. Reference? I also do not agree with your definition of "trivia" that you are removing. --IanOsgood (talk) 01:36, 11 September 2019 (UTC)[reply]
My division is purely organizational. If history is to be a large amount of narrative text, it has to be given a topic-by-topic focus, and broken into readable sections. So I've proposed such. If it seems rather arbitrary, how arbitrary is it to divide the history into 6 one-decade periods, i.e. 1950-1960, etc.? It doesn't give a sense of events. I've been around since the earliest days of computer chess, and I remember it in terms of events. If I don't know about, or don't remember, something mentioned, it's probably not very notable. Sbalfour (talk) 19:16, 13 September 2019 (UTC)[reply]
By decades is a better organization for this in Wikipedia. The article is not to be a personal monograph on computer chess as seen by one individual editor, but rather a compilation of what the best sources say about the subject. I don't think Wikipedia is the best place for highly original organization or interpretation; conservatism is better here on those points. Along with IanOsgood, I also do not agree with your definition of trivia that should be removed. That said, I think the editors doing the work should have a larger say in most of these issues than others who sit on the sidelines and nitpick. I appreciate that you are making the effort to try to improve the article. Quale (talk) 03:15, 14 September 2019 (UTC)[reply]
The timeline is rather large. Maybe split to a separate article? --IanOsgood (talk) 01:41, 11 September 2019 (UTC)[reply]
You are thinking along the same lines as I am. And maybe another article for Humans versus computers, and yet another for actual programs, rating lists, and computer-computer competitions. But we're wandering far afield here; a good article on computer chess should have a BRIEF summary of all of these. But I'm now concerned that the entire bulletized history was copied from chess.com, "Computers and chess - A History" [6]. They omitted a couple of the same events omitted from our list, and have two blunders that are also on our list. I'd extensively edited the history before I discovered the copyvio. I'm worried, and considering deleting the whole list. A bulletized list isn't a good or proper presentation for this kind of information; the major events should be incorporated into a narrative that gives cause-and-effect relationships between events. This article would readily flunk as a college history paper. Sbalfour (talk) 19:16, 13 September 2019 (UTC)[reply]

section Computer methods[edit]

(Formerly titled Implementation issues). My first issue is that Wikipedia should not be a how-to manual for implementing a chess program. The section is at the least over-emphasized by sheer size; it's the largest section in the article. My second concern is organizational: this info is best moved to the Chess engine article, which is about how chess programs work, whereas this article is about computer chess in general. Most of the info here duplicates info in articles like Board representation, Minimax, Alpha–beta pruning, Evaluation function, Quiescence search and others. And nearly the whole section implicitly assumes that all chess programs use a minimax/evaluation function model, which isn't wholly true today, and certainly hasn't been true throughout history. I'd propose condensing it, merging it into existing articles, or splitting it out into its own article. Sbalfour (talk) 19:36, 13 September 2019 (UTC)[reply]
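For readers unfamiliar with the minimax/evaluation-function model discussed above, here is a minimal sketch in the negamax formulation with alpha-beta pruning, run over a hand-built toy tree rather than real chess positions (the tree and its leaf values are invented for illustration; a real engine would search an actual board representation and call an evaluation function at the leaves):

```python
# Negamax with alpha-beta pruning over a toy game tree.
# A leaf (int) is a static evaluation from the viewpoint of the side
# to move; an internal node (list) holds the positions reachable in
# one move.  Each side picks the child that is worst for the opponent,
# hence the sign flip.

def negamax(node, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, int):
        return node                     # static evaluation at a leaf
    best = float('-inf')
    for child in node:
        best = max(best, -negamax(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:               # opponent won't allow this line
            break                       # (alpha-beta cutoff)
    return best

# Toy tree: two candidate moves; the second leads to a position worth
# -5 for the opponent, i.e. +5 for us.
print(negamax([[3, -2], [5]]))          # prints 5
```

Negamax is just minimax with both sides expressed as maximizers of the negated child score; alpha-beta pruning skips lines already proven irrelevant without changing the result.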

There's a bigger concern: as a game, computer chess is a paradigm. How do we computationally represent knowledge? How do we know what knowledge is useful? How do we know search can find the best move, or even a good move? Why do we need a tree? Can't we just "inspect" the position? Chess masters can give an approximate evaluation of a position in seconds. Why can't the computer? Evidently chess masters use a kind of pattern-matching process to do that. What are those patterns, and why can't they be used by a program? Why do we need to encode knowledge at all? Can't the computer learn on its own? Then we have genetic programming, programs that evolve themselves to play the game. What about those? These are not trivial questions, and while some have been resolved today, some haven't, and in the 1950s we didn't have answers to any of them. The article should at some length discuss what we have today and why we have it, rather than "here's how one works". We've undergone a major paradigm shift starting in December 2017 with the rollout of AlphaZero; I suspect pretty soon we'll see most chess programs be Monte Carlo tree searchers. We shouldn't ignore the alternative approaches left behind. Sbalfour (talk) 19:50, 13 September 2019 (UTC)[reply]
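To contrast with the minimax model, the Monte Carlo approach can be sketched in its simplest "flat" form over a toy game (Nim: take 1 or 2 objects from a pile, taking the last one wins). This is only an illustrative stand-in: full Monte Carlo tree search adds a tree with UCT-style selection and backpropagation, and AlphaZero-style programs further replace the random playouts with a learned policy/value network.

```python
import random

def playout(pile, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while True:
        take = random.choice([1, 2]) if pile >= 2 else 1
        pile -= take
        if pile == 0:
            return to_move              # whoever took the last object wins
        to_move = 1 - to_move

def best_move(pile, playouts=2000):
    """Flat Monte Carlo: pick the move with the highest sampled win rate
    for player 0, estimated from random playouts."""
    rates = {}
    for take in (1, 2):
        if take > pile:
            continue
        if take == pile:
            rates[take] = 1.0           # taking the last object wins outright
            continue
        wins = sum(playout(pile - take, to_move=1) == 0
                   for _ in range(playouts))
        rates[take] = wins / playouts
    return max(rates, key=rates.get)

# From a pile of 4, taking 1 leaves the opponent the losing pile of 3;
# random sampling reliably prefers it (win rate ~0.75 vs ~0.5).
print(best_move(4))                     # prints 1
```

The point of the paradigm shift is visible even here: no evaluation function is encoded at all; position quality is estimated purely by sampling outcomes.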

Playing strength versus computer speed[edit]

The blanket statement that doubling the processor speed results in an increase of X Elo points in playing strength is misleading enough to be outright false. The statement is taken out of context. There's actually a complex and instructive relationship between processor speed and the knowledge encoded in the evaluation function. Programs that are poor in knowledge benefit less from a speedup than programs with better knowledge. Programs with low ratings might gain as much as a rating class (200 Elo points) from a doubling of processor speed. Rather than processor speed, the appropriate measure here is an increase of one ply in search depth (a function of processor speed). At Elo 600 (rank beginner), an extra ply of search depth might result in a gain of 350 rating points. At Elo 3450 (Stockfish 10), an extra ply might add only a few Elo points to its playing strength. The performance of a chess program depends on the resources available to it, processor speed among them; the others are data storage for the opening book and endgame tablebases, memory for the transposition table, and the more important but intangible quantity of knowledge encoded in its evaluation function.

What we really might want to know is, to improve a program's performance, should we spend more money on hardware (like a CPU with more cores), or more effort on the evaluation function? And the answer is, "it depends". There are no ready metrics for quantifying the utility or quantity of knowledge in the evaluation function, but for most modern programs, that's the critical factor, not processor speed. Sbalfour (talk) 20:55, 13 September 2019 (UTC)[reply]
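The diminishing return per extra ply described above can be made concrete with a purely illustrative model that interpolates between the two figures quoted (about +350 Elo per ply at Elo 600, only a few points per ply near Elo 3450). The straight-line shape and the endpoint values are assumptions for illustration, not measured data:

```python
# Illustrative-only model of Elo gained per extra ply of search depth,
# linearly interpolating between two assumed anchor points taken from
# the discussion above.  Real gain curves are nonlinear and depend on
# the program's knowledge, not just its rating.

def gain_per_ply(elo, low=(600, 350.0), high=(3450, 5.0)):
    """Rough estimate of Elo gained from one more ply of search at `elo`."""
    (e0, g0), (e1, g1) = low, high
    t = (elo - e0) / (e1 - e0)          # 0 at the weak end, 1 at the strong end
    return g0 + t * (g1 - g0)

print(round(gain_per_ply(600)))         # prints 350
print(round(gain_per_ply(3450)))        # prints 5
```

Even this crude model captures the practical point: for a weak program, deeper search (faster hardware) dominates; for a strong one, almost all remaining improvement must come from knowledge.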

El Ajedrecista a hoax?[edit]

According to the "El Ajedrecista" article, it is an authentic chess playing machine unlike "the Turk". But on this article, it says that "El Ajedrecista" is a hoax. How is it a hoax? --59.19.11.45 (talk) 00:58, 15 May 2020 (UTC)[reply]

Removed. Nice catch. codl (talk) 01:57, 15 May 2020 (UTC)[reply]

"Computer Xiangqi" listed at Redirects for discussion[edit]

A discussion is taking place to address the redirect Computer Xiangqi. The discussion will occur at Wikipedia:Redirects for discussion/Log/2021 March 4#Computer Xiangqi until a consensus is reached, and readers of this page are welcome to contribute to the discussion. signed, Rosguill talk 17:19, 4 March 2021 (UTC)[reply]