Cognitive computing seeks to build models that emulate brain functions, which can be studied through electroencephalography (EEG). Developing accurate and robust EEG classification models is therefore crucial for advancing cognitive computing. Although supervised EEG classification models achieve high accuracy, they are constrained by labor-intensive annotation and poor generalization. Self-supervised models sidestep these issues but struggle to match the accuracy of supervised learning. Three challenges persist: 1) capturing the temporal dependencies in EEG signals; 2) designing loss functions that adequately measure feature similarity in self-supervised models; and 3) addressing the class imbalance prevalent in EEG data. This study introduces the DreamCatcher Network (DCNet), a self-supervised EEG classification framework with a two-stage training strategy: the first stage learns robust representations through contrastive learning, and the second stage transfers the representation encoder to a supervised EEG classification task. DCNet uses time-series contrastive learning to construct, without manual labels, representations that capture temporal correlations. A novel loss function, SelfDreamCatcherLoss, is proposed to evaluate the similarity between these representations and improve DCNet's performance. In addition, two data augmentation methods are integrated to mitigate class imbalance. Extensive experiments show that DCNet outperforms current state-of-the-art models, achieving high accuracy on both the Sleep-EDF and HAR datasets. DCNet thus holds substantial promise for improving sleep disorder detection and accelerating the development of healthcare systems driven by cognitive computing.
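To make the two-stage strategy concrete, the sketch below illustrates contrastive pretraining of an encoder followed by supervised transfer. The abstract does not specify DCNet's architecture, augmentations, or the form of SelfDreamCatcherLoss, so the 1D-CNN encoder, the jitter/scaling augmentations, and the standard NT-Xent loss used here are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Hypothetical 1D-CNN encoder for EEG windows (channels x time)."""
    def __init__(self, in_ch=1, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=25, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

def augment(x):
    # Illustrative EEG augmentations: random amplitude scaling plus jitter noise.
    scale = torch.empty(x.size(0), 1, 1).uniform_(0.8, 1.2)
    return x * scale + 0.05 * torch.randn_like(x)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss; a stand-in for the unspecified SelfDreamCatcherLoss."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)           # 2N x d, unit-norm rows
    sim = z @ z.t() / tau                                 # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])  # each view's positive
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised contrastive pretraining of the encoder.
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for _ in range(5):                                        # toy loop on synthetic EEG
    x = torch.randn(64, 1, 3000)                          # e.g., 30-s windows at 100 Hz
    z1, z2 = enc(augment(x)), enc(augment(x))             # two augmented views per window
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: transfer the pretrained encoder to supervised classification.
head = nn.Linear(128, 5)                                  # e.g., 5 sleep stages
opt2 = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-4)
x, y = torch.randn(64, 1, 3000), torch.randint(0, 5, (64,))
loss = F.cross_entropy(head(enc(x)), y)
opt2.zero_grad(); loss.backward(); opt2.step()
```

The key design point this sketch captures is that only the encoder carries over between stages: stage 1 optimizes it with a label-free contrastive objective on augmented views, and stage 2 fine-tunes it jointly with a lightweight classification head on labeled data.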