About this course

Who is this course for?

You: Are a beginner in machine learning, deep learning, or AI and would like to learn PyTorch.

This course: Teaches you PyTorch and many machine learning, deep learning and AI concepts in a hands-on, code-first way.

If you already have a year or more of experience in machine learning, this course may still help, but it is specifically designed to be beginner-friendly.

What are the prerequisites?

  • 3-6 months of experience coding in Python.
  • At least one beginner machine learning course (though you may be able to skip this; resources are linked for many different topics).
  • Experience using Jupyter Notebooks or Google Colab (though you can pick this up as we go along).
  • A willingness to learn (most important).
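
To give a flavor of that hands-on, code-first style, here is a minimal PyTorch sketch (mine, not the course's) that fits a straight line with gradient descent, the usual first exercise:

    import torch

    # Fit y = 2x + 1 by gradient descent: a "hello world" for
    # PyTorch's tensors, autograd, modules, and optimizers.
    x = torch.linspace(0, 1, 100).unsqueeze(1)
    y = 2 * x + 1 + 0.05 * torch.randn_like(x)

    model = torch.nn.Linear(1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(500):
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(model.weight.item(), model.bias.item())  # close to 2.0 and 1.0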

Andres Vourakis writes:

Data Scientist Handbook 2024

Curated resources (Free & Paid) to help data scientists learn, grow, and break into the field of data science.

Even though there are hundreds of resources out there (too many to keep track of), I limit them to a maximum of 5 per category to ensure you get the most valuable and relevant ones. The whole point of this repository is to help you avoid getting overwhelmed by too many choices, so you can spend less time researching and more time learning.

FAQs

  • How is curation done? Curation is based on thorough research, recommendations from people I trust, and my years of experience as a Data Scientist.
  • Are all resources free? Most resources here will be free, but I will also include paid alternatives if they are truly valuable to your career development. All paid resources include the symbol 💲.
  • How often is the repository updated? I plan to revisit it regularly to make sure all resources are still available and relevant, and to add new ones.

July 17, 2024

Allen B. Downey writes:

Elements of Data Science is an introduction to data science for people with no programming experience. My goal is to present a small, powerful subset of Python that allows you to do real work with data as quickly as possible.

Part 1 includes six chapters that introduce basic Python with a focus on working with data.

Part 2 presents exploratory data analysis using Pandas and empiricaldist — it includes a revised and updated version of the material from my popular DataCamp course, “Exploratory Data Analysis in Python.”

Part 3 takes a computational approach to statistical inference, introducing resampling methods, bootstrapping, and randomization tests (a short bootstrap sketch follows the part overview below).

Part 4 is the first of two case studies. It uses data from the General Social Survey to explore changes in political beliefs and attitudes in the U.S. in the last 50 years. The data points on the cover are from one of the graphs in this section.

Part 5 is the second case study, which introduces classification algorithms and the metrics used to evaluate them — and discusses the challenges of algorithmic decision-making in the context of criminal justice.
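
To illustrate the resampling approach of Part 3, here is a minimal bootstrap sketch (mine, not the book's; the data are made up):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sample: 200 observations of some measured quantity.
    sample = rng.normal(loc=10.0, scale=2.0, size=200)

    # Bootstrap the sampling distribution of the mean: resample with
    # replacement and recompute the statistic many times.
    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(10_000)
    ])

    # A 95% confidence interval from the bootstrap percentiles.
    low, high = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean = {sample.mean():.2f}, 95% CI = ({low:.2f}, {high:.2f})")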

This project started in 2019, when I collaborated with a group at Harvard to create a data science class for people with no programming experience. We discussed some of the design decisions that went into the course and the book in this article.

Read Elements of Data Science in the form of Jupyter notebooks.


Video description:

We reproduce the GPT-2 (124M) from scratch.

This video covers the whole process:

First we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 papers and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations.

Keep in mind that in some places this video builds on the knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar.
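
For orientation, the GPT-2 (124M) configuration being reproduced looks roughly like this; the field names follow nanoGPT's GPTConfig, and the values are the published GPT-2 small hyperparameters:

    from dataclasses import dataclass

    @dataclass
    class GPTConfig:
        block_size: int = 1024   # maximum context length
        vocab_size: int = 50257  # GPT-2 BPE vocabulary (the video pads this to 50304 for speed)
        n_layer: int = 12        # number of transformer blocks
        n_head: int = 12         # attention heads per block
        n_embd: int = 768        # embedding / residual stream width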


Bayman, Emine Ozgur, PhD; Dexter, Franklin, MD, PhD, FASA. Multicollinearity in Logistic Regression Models. Anesthesia & Analgesia 133(2):362-365, August 2021. DOI: 10.1213/ANE.0000000000005593
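
A common diagnostic for the multicollinearity the paper discusses is the variance inflation factor (VIF). A minimal sketch with statsmodels, using made-up data where one predictor is nearly a linear function of another:

    import numpy as np
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    n = 500
    x1 = rng.normal(size=n)
    x2 = x1 + 0.1 * rng.normal(size=n)  # nearly collinear with x1
    x3 = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2, x3])  # include an intercept

    # VIF_j = 1 / (1 - R_j^2); values above ~10 are a common warning sign.
    for j, name in enumerate(["const", "x1", "x2", "x3"]):
        print(name, round(variance_inflation_factor(X, j), 1))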


cross-posted from: https://lemmy.one/post/13942290

Abstract: We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framework for automatic and efficient differentiable reasoning that is based on the theory of provenance semirings. We evaluate Scallop on a suite of eight neurosymbolic applications from the literature. Our evaluation demonstrates that Scallop is capable of expressing algorithmic reasoning in diverse and challenging AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain knowledge, and yields solutions that are comparable or superior to state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions outperform these models in aspects such as runtime and data efficiency, interpretability, and generalizability.
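
As a taste of the declarative layer, here is the classic recursive edge/path program written against the scallopy Python binding (adapted from the project's README; treat the exact API as an assumption):

    import scallopy

    # Build a Scallop context, load a small edge relation, and define
    # reachability recursively in Datalog style.
    ctx = scallopy.ScallopContext()
    ctx.add_relation("edge", (int, int))
    ctx.add_facts("edge", [(0, 1), (1, 2), (2, 3)])
    ctx.add_rule("path(a, c) = edge(a, c) or (path(a, b) and edge(b, c))")
    ctx.run()
    print(list(ctx.relation("path")))  # all reachable pairs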


Original post on r/learnmachinelearning


Apr 18, 2022 | Tarique Anwar Writes:

The main reason ReLU is used is that it is simple, fast, and empirically seems to work well.

But with the emergence of Transformer-based models, different variants of activation functions and GLU have been experimented with, and they do seem to perform better. Some of them are:

  • GeLU²
  • Swish¹
  • GLU³
  • GEGLU⁴
  • SwiGLU⁴

We will go over some of these in detail, but before that let's see where exactly these activations are utilized in the Transformer architecture.
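
As a preview, here is a minimal PyTorch sketch of a SwiGLU feed-forward block, one of the gated variants listed above (my own sketch, not the article's code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SwiGLUFeedForward(nn.Module):
        """Transformer FFN with a SwiGLU gate: W3(silu(W1 x) * (W2 x))."""
        def __init__(self, d_model: int, d_ff: int):
            super().__init__()
            self.w1 = nn.Linear(d_model, d_ff, bias=False)  # gate projection
            self.w2 = nn.Linear(d_model, d_ff, bias=False)  # value projection
            self.w3 = nn.Linear(d_ff, d_model, bias=False)  # output projection

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.w3(F.silu(self.w1(x)) * self.w2(x))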

Read Activation function and GLU variants for Transformer models


Summary

Activation functions are crucial in neural networks, introducing non-linearity and enabling the modeling of complex patterns across varied tasks. This guide delves into the evolution, characteristics, and applications of state-of-the-art activation functions, illustrating their role in enhancing neural network performance. It discusses the transition from classic functions like sigmoid and tanh to advanced ones such as ReLU and its variants, addressing challenges like the vanishing gradient problem and the dying ReLU issue. Concluding with practical heuristics for selecting activation functions, the article emphasizes the importance of considering network architecture and task specifics, highlighting the rich diversity of activation functions available for optimizing neural network designs.
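
A quick sketch of the dying-ReLU issue the summary mentions: for negative inputs ReLU passes zero gradient, while Leaky ReLU keeps a small slope (my example, not the article's):

    import torch
    import torch.nn.functional as F

    for name, act in [("relu", F.relu), ("leaky_relu", F.leaky_relu)]:
        x = torch.tensor([-2.0, -0.5, 0.5, 2.0], requires_grad=True)
        act(x).sum().backward()
        print(name, x.grad.tolist())
    # relu       -> [0.0, 0.0, 1.0, 1.0]   (no gradient for negative inputs)
    # leaky_relu -> [0.01, 0.01, 1.0, 1.0] (default negative_slope=0.01)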


Dawn Wages writes:

Python Data Science Day is a full day of 25-minute and 5-minute community-contributed sessions, streaming March 14th, 2024 on the VS Code YouTube channel.


Start 2024 with a new goal: become an expert with Python in the cloud. Join us this quarter as we challenge ourselves with Python, Machine Learning and Data Science.

7 hr 1 min | 10 Modules


cross-posted from: https://lemmy.ml/post/13088944
