Northern Premier Division stats & predictions
Upcoming Thrills: Tomorrow's Northern Premier Division Matches in England
Welcome to the ultimate guide to football in England's Northern Premier Division. Whether you're a die-hard fan or simply love the excitement of a good match, you've come to the right place. With tomorrow's lineup packed with thrilling encounters, we've got you covered with expert predictions and betting insights. Get ready for a day of high-octane football action!
Matchday Preview
The Northern Premier Division is renowned for its competitive spirit and unpredictable outcomes. Tomorrow, fans will witness some of the most eagerly anticipated fixtures of the season. Let's dive into the details of each match, offering you expert insights and predictions to enhance your viewing experience.
Match 1: Barrow vs. Altrincham
This clash promises to be a tactical battle between two teams with contrasting styles. Barrow, known for their solid defense, will face Altrincham, who are on an impressive scoring streak. Our experts predict a tight game, with Altrincham slightly favored to clinch a narrow victory.
- Barrow: Known for their resilience and strategic gameplay.
- Altrincham: On fire with their offensive prowess.
- Prediction: Altrincham 1-0 Barrow
Match 2: Fylde vs. Blyth Spartans
Fylde and Blyth Spartans are set to deliver an explosive encounter. Both teams have been in excellent form, making this a must-watch fixture. Expect goals from both ends as these teams battle it out for crucial points.
- Fylde: Strong home advantage and attacking flair.
- Blyth Spartans: Resilient away from home with a knack for comebacks.
- Prediction: Fylde 2-1 Blyth Spartans
Match 3: Macclesfield Town vs. North Ferriby United
Macclesfield Town aim to continue their unbeaten run against North Ferriby United. With both teams desperate for points, this match is expected to be a tightly contested affair.
- Macclesfield Town: Consistent performers with a strong squad depth.
- North Ferriby United: Fighting hard to climb up the table.
- Prediction: Macclesfield Town 2-1 North Ferriby United
Betting Insights and Tips
Betting on football can be as thrilling as watching the game itself. Here are some expert tips and predictions to help you make informed bets on tomorrow's matches:
- Total Goals Over/Under: Expect a high-scoring affair in the Fylde vs. Blyth Spartans match. Bet on over 2.5 goals.
- First Goal Scorer: For the Barrow vs. Altrincham match, consider backing Altrincham's star striker as the first goal scorer.
- Correct Score: In the Macclesfield Town vs. North Ferriby United match, a correct score bet of 2-1 in favor of Macclesfield Town could be lucrative.
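As a quick illustration of how these markets settle (using purely hypothetical odds rather than a quoted price): an over 2.5 goals bet wins if the match produces three or more goals, and at decimal odds of 1.80 a £10 stake would return £18, an £8 profit. Those odds imply a probability of roughly 1 / 1.80 ≈ 56%, so the bet only offers value if you rate the chance of three-plus goals higher than that.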
In-Depth Team Analysis
Barrow AFC
Barrow have been a formidable force this season, thanks to their disciplined defense and strategic gameplay. Their ability to absorb pressure and hit on the counter has been key to their success. However, they will need to step up their game against an in-form Altrincham side.
- Key Player: Look out for their midfield maestro who orchestrates play from the heart of the park.
- Tactic: Expect a cautious approach with quick transitions on counter-attacks.
Altrincham FC
Altrincham's recent form has been nothing short of spectacular. Their attacking trio has been causing havoc for opposition defenses, making them one of the most feared teams in the division.
- Key Player: Their dynamic forward line is crucial, with one player leading the scoring charts.
- Tactic: High pressing and aggressive forward play will be their strategy against Barrow.
Fylde FC
Fylde's home form has been impressive, with fans providing an electric atmosphere that boosts the team's performance. Their attacking flair and solid defense make them a tough opponent for any visiting team.
- Key Player: Their creative midfielder is pivotal in setting up goal-scoring opportunities.
- Tactic: Expect an expansive style of play with plenty of width provided by their wingers.
Blyth Spartans FC
Blyth Spartans have shown remarkable resilience this season, often pulling off stunning comebacks. Their ability to grind out results makes them a dangerous opponent, especially away from home.
- Key Player: Their tenacious defender is crucial in breaking up opposition attacks.
- Tactic: A solid defensive setup with quick transitions could be their approach against Fylde.
Macclesfield Town FC
Macclesfield Town's unbeaten streak is a testament to their consistency and team spirit. Their balanced squad allows them to adapt to different game situations effectively.
- Key Player: A versatile forward who can play across multiple positions adds depth to their attack.
- Tactic: A mix of possession-based play and direct attacks will be key against North Ferriby United.
North Ferriby United
North Ferriby United's determination to climb up the table is evident in their recent performances. They have shown they can compete with top teams when they bring their A-game.
- Key Player: Their captain leads by example both on and off the pitch, inspiring his teammates.
- Tactic: A compact defensive unit with quick counter-attacks could be their strategy against Macclesfield Town.
Past Performances and Trends
Analyzing past performances can provide valuable insights into how teams might perform in upcoming matches. Here are some trends and statistics from previous encounters between these teams:
- Last Five Matches - Barrow vs. Altrincham:
  - Average goals per match: 1.8
  - Last three matches ended in draws or narrow victories by one goal.
- Last Five Matches - Fylde vs. Blyth Spartans:
  - Average goals per match: 2.6
  - Fylde have won three out of five encounters by two or more goals.
- Last Five Matches - Macclesfield Town vs. North Ferriby United:
  - Average goals per match: 2.0
  - Macclesfield Town have won four out of five matches against North Ferriby United at home.