Developing Table Tennis Strokes Skill through Learning Method, Feedback and Agility
- Jonni Siahaan
Abstract
This paper presents a discussion of the learning method, feedback, and agility needed in the process of developing table tennis stroke skills. The learning method used is the random learning method, categorized into regular and irregular: the regular method develops the table tennis service, drive, smash, and lob in a fixed sequence, while the irregular method lets the learner choose the strokes freely. Feedback is divided into direct feedback through demonstration and direct feedback using the words right or wrong. Agility is categorized as poor, average, or good. This article gives an overview of table tennis stroke skill development through the random learning method, direct feedback, and agility. Data were analyzed using ANOVA with Scheffé's post-hoc test, with the level of significance set at α < 0.05, and the following conclusions were drawn:
1) The regular random learning method (RLM-r) is significantly different from the irregular random learning method (RLM-ir).
2) Direct feedback using demonstration (DF-d) is significantly different from direct feedback using right or wrong (DF-rf).
3) Good agility (H-a) is significantly different from average agility (M-a) and poor agility (L-a).
4) There is an interaction between the random learning method (RLM) and direct feedback (DF).
5) There is an interaction between the random learning method (RLM) and agility.
6) There is an interaction between direct feedback (DF) and agility.
7) There is an interaction among the random learning method (RLM), direct feedback (DF), and agility.
8) The combination of the regular random learning method (RLM-r), direct feedback through demonstration (DF-d), and good agility (H-a) is significantly different from, and better than, the combination of the regular random learning method (RLM-r), direct feedback using right or wrong (DF-rf), and good agility (H-a).
9) The combination of the regular random learning method (RLM-r), direct feedback using demonstration (DF-d), and poor agility (L-a) is not significantly different from the combination of the regular random learning method (RLM-r), direct feedback using right or wrong (DF-rf), and poor agility (L-a).
10) The combination of the irregular random learning method (RLM-ir), direct feedback using demonstration (DF-d), and good agility (H-a) is not significantly different from the combination of the irregular random learning method (RLM-ir), direct feedback using right or wrong (DF-rf), and good agility (H-a).
11) The combination of the irregular random learning method (RLM-ir), direct feedback using demonstration (DF-d), and poor agility (L-a) is not significantly different from the combination of the irregular random learning method (RLM-ir), direct feedback using right or wrong (DF-rf), and poor agility (L-a).
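For readers who want to reproduce this kind of analysis, the sketch below illustrates a three-way factorial ANOVA (learning method × feedback × agility) at α = 0.05 in Python with statsmodels. The data, cell sizes, and column names are illustrative assumptions rather than the study's material, and Tukey's HSD is used here only as a readily available stand-in for Scheffé's post-hoc comparisons.

```python
# Hypothetical sketch of the 2 x 2 x 3 factorial ANOVA described in the abstract
# (random learning method x direct feedback x agility), alpha = 0.05.
# All data and cell sizes below are simulated placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

levels_method = ["RLM-r", "RLM-ir"]      # regular / irregular random learning method
levels_feedback = ["DF-d", "DF-rf"]      # demonstration / right-or-wrong feedback
levels_agility = ["H-a", "M-a", "L-a"]   # good / average / poor agility

# Build a balanced design with an assumed 10 subjects per cell.
rows = []
for m in levels_method:
    for f in levels_feedback:
        for a in levels_agility:
            for _ in range(10):
                rows.append({"method": m, "feedback": f, "agility": a,
                             "score": rng.normal(50, 5)})
df = pd.DataFrame(rows)

# Three-way ANOVA with all two-way and three-way interaction terms.
model = ols("score ~ C(method) * C(feedback) * C(agility)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc pairwise comparisons on agility (Tukey HSD as a stand-in for Scheffe).
print(pairwise_tukeyhsd(df["score"], df["agility"], alpha=0.05))
```

In a real replication the simulated scores would be replaced by measured stroke-skill scores, and a Scheffé correction would be applied to the pairwise and combination contrasts reported in conclusions 8 through 11.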
- DOI: 10.5539/ass.v10n5p63
This work is licensed under a Creative Commons Attribution 4.0 License.