Commit 939396a by Mr8bit (verified) · 1 parent: 6fde9c4

Update README.md

Files changed (1): README.md (+97 −7)
---
language: en
license: mit
library_name: tensoraerospace
tags:
- aerospace
- reinforcement-learning
- control
- gymnasium
- tensorflow
- pytorch
---

# TensorAeroSpace: Aerospace RL & Control

Realistic aerospace environments and modern RL/control algorithms for training flight control systems. Open source and MIT licensed.

- Website & docs: https://tensoraerospace.readthedocs.io/
- GitHub: https://github.com/TensorAeroSpace/TensorAeroSpace
- PyPI: https://pypi.org/project/tensoraerospace/

## What we build

- Environments: F-16, B747, X-15, rockets, and satellites (linear/linearized state-space models)
- Algorithms: IHDP, DQN, A3C/A2C-NARX, PPO, SAC, DDPG, GAIL, PID, MPC
- Tooling: benchmarking, metrics, examples, docs, and lessons
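
The environments listed above are (linearized) state-space models, so a rollout is just a discrete-time linear system driven by a controller. A minimal sketch in NumPy of that pattern, using placeholder matrices and a simple state-feedback gain (these numbers are illustrative, not the actual aircraft dynamics):

```python
import numpy as np

# Illustrative 2-state discrete-time system x[k+1] = A x[k] + B u[k]
# (placeholder matrices, not a real TensorAeroSpace model)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])

x = np.array([[1.0], [0.0]])  # initial state
K = np.array([[2.0, 1.0]])    # simple state-feedback gain

states = [x]
for _ in range(50):
    u = -K @ x                # proportional state feedback
    x = A @ x + B @ u
    states.append(x)

# The regulated state decays toward the origin for this stable closed loop
print(float(np.linalg.norm(states[-1])))
```

The library's environments wrap dynamics like this behind the Gymnasium `reset`/`step` interface, as in the quick-use example below.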
27
+
28
+ ## Featured model(s)
29
+
30
+ - IHDP agent for F‑16 longitudinal alpha tracking — TensorAeroSpace/ihdp‑f16
31
+
32
+ Quick use (IHDP):
33
+
34
+ ```python
35
+ import os
36
+ import numpy as np
37
+ import gymnasium as gym
38
+ from tensoraerospace.agent.ihdp.model import IHDPAgent
39
+
40
+ agent = IHDPAgent.from_pretrained(
41
+ "TensorAeroSpace/ihdp-f16", access_token=os.getenv("HF_TOKEN")
42
+ )
43
+
44
+ env = gym.make(
45
+ "LinearLongitudinalF16-v0",
46
+ number_time_steps=2002,
47
+ initial_state=[[0],[0],[0]],
48
+ reference_signal=np.zeros((1, 2002)),
49
+ use_reward=False,
50
+ state_space=["theta","alpha","q"],
51
+ output_space=["theta","alpha","q"],
52
+ control_space=["ele"],
53
+ tracking_states=["alpha"],
54
+ )
55
+
56
+ obs, info = env.reset()
57
+ ref = env.unwrapped.reference_signal
58
+ for t in range(ref.shape[1]-3):
59
+ u = agent.predict(obs, ref, t)
60
+ obs, r, terminated, truncated, info = env.step(np.array(u))
61
+ if terminated or truncated:
62
+ break
63
+ ```
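
Once a rollout like the one above has been recorded, tracking quality can be summarized with an RMS error between the reference and the tracked state. A generic NumPy sketch; the toy arrays here merely stand in for the environment's `reference_signal` row and a logged alpha trajectory:

```python
import numpy as np

def rms_tracking_error(reference: np.ndarray, actual: np.ndarray) -> float:
    """Root-mean-square error between a reference signal and a tracked state."""
    reference = np.asarray(reference, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((reference - actual) ** 2)))

# Toy data standing in for ref[0, :] and the logged alpha history
t = np.linspace(0.0, 1.0, 200)
reference = np.zeros_like(t)        # zero reference, as in the example above
actual = 0.05 * np.exp(-5.0 * t)    # a decaying tracking transient
print(rms_tracking_error(reference, actual))
```

The library also ships its own benchmarking and metrics tooling (see "What we build"); this standalone helper is only meant to show the idea.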

## Install

```bash
pip install tensoraerospace
```

## Save & share your models

All major agents support saving locally and pushing to the Hugging Face Hub:

```python
from tensoraerospace.agent.sac.sac import SAC

agent = SAC(env)
agent.train(num_episodes=1)

# Save a checkpoint locally, then push it to the Hub
folder = agent.save_pretrained("./checkpoints")
agent.push_to_hub("<org>/<model-name>", base_dir="./checkpoints", access_token="hf_...")
```

## Learn more

- Quick start & examples: ./example/
- English docs: ./docs/en/index.md
- Russian docs: ./docs/ru/index.md

## License

MIT: free for academia and industry.

---

Source: TensorAeroSpace org page on the Hugging Face Hub (https://huggingface.co/TensorAeroSpace)