Tags: custom, hard

AdaptiveResourceGatheringWithExperienceTracking

RESEARCH HYPOTHESIS: In dynamically changing environments, where the environment state transitions after each agent action, agents that incorporate self-observation data (recent action sequences, reward histories, environment change patterns, and strategy effectiveness metrics) into their observation space will demonstrate superior adaptation performance compared to agents that observe only the external environment state.

SUB-HYPOTHESES:

H1: Agents with access to their own recent action history and corresponding rewards will adapt faster to dynamic environment changes than agents without this self-observation capability.

H2: Agents that track environment change patterns (how the environment responds to their actions) will develop more robust strategies in dynamic settings.

H3: Agents that monitor their own strategy effectiveness (progress toward goals over recent timesteps) will avoid repeating ineffective action sequences and converge to better policies.

USER'S ORIGINAL IDEA: A dynamic reinforcement learning environment and an adaptive agent: the environment changes dynamically during training, so the agent inside must adapt to the new situation every time; everything in the environment changes after each action. The hypothesis: when the environment is dynamic and changes after each of the agent's actions, the agent must store experience, perform self-observation while taking actions, store that self-observation experience as well, and learn from these experiences in order to adapt to the dynamically changing environment.

The hypothesis has three layers:
1. The environment must change dynamically.
2. The agent must store experience (experience storage): the agent explicitly retains the knowledge "in this state, this happened, and the environment changed like this."
3. The agent must perform self-observation: the agent observes only the external environment (walls, goal, resources), but it should also observe its own internal state: "what did I do in the last 5 steps, what happened, how did the environment change, did my strategy work?"

What the environment code must therefore include: a summary of the agent's own experience added to the observation space:
- the action sequence over the last N steps (what I did)
- the reward sequence over the last N steps (what happened)
- an environment-change vector (how the environment changed: wall toggle rate, how far the goal shifted)
- strategy effectiveness (whether I moved closer to or farther from the goal over the last K steps)
- experience patterns (which of my strategies worked under this kind of environment change)

In other words, the agent's observation should contain not only "what the external world looks like right now" but also "what I did, what happened, and how the environment reacted" (a minimal sketch of this feature assembly appears below).

CRITICAL: The environment MUST implement ALL aspects of the hypothesis, including any agent-side mechanisms (self-observation, experience storage, adaptive behavior tracking), as part of the OBSERVATION SPACE and REWARD FUNCTION. Do not just build the environment dynamics; also embed the agent-side requirements into the env's observation/reward design.
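The self-observation requirement above boils down to concatenating an experience summary onto the external state vector. Below is a minimal sketch of that assembly, assuming fixed-length histories kept in deques; the names (SelfObservationTracker, build_observation) are illustrative, not part of the spec:

```python
from collections import deque

import numpy as np

N_HISTORY = 8  # matches the spec's "last 8 actions / last 8 rewards"

class SelfObservationTracker:
    """Rolling action/reward histories summarized as a flat feature vector."""

    def __init__(self, n_history: int = N_HISTORY):
        # Pre-fill with neutral values so the feature length is fixed from step 0.
        self.actions = deque([0] * n_history, maxlen=n_history)
        self.rewards = deque([0.0] * n_history, maxlen=n_history)

    def record(self, action: int, reward: float) -> None:
        self.actions.append(action)
        self.rewards.append(reward)

    def features(self) -> np.ndarray:
        # "What did I do, what happened": raw action and reward histories.
        return np.concatenate([
            np.asarray(self.actions, dtype=np.float32),
            np.asarray(self.rewards, dtype=np.float32),
        ])

def build_observation(external_state: np.ndarray,
                      tracker: SelfObservationTracker) -> np.ndarray:
    """External world state plus the agent's own experience summary."""
    return np.concatenate([external_state, tracker.features()])
```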
ENVIRONMENT SPECIFICATION:

OBSERVATION SPACE: 99-dimensional vector containing:
1. Agent position (x, y) [2 dims]
2. Resource locations (5 resources, each with x, y, type, quantity) [20 dims]
3. Agent inventory (4 resource types) [4 dims]
4. Market prices for each resource type [4 dims]
5. Last 8 actions (0=move_up, 1=move_down, 2=move_left, 3=move_right, 4=gather, 5=sell) [8 dims]
6. Last 8 rewards [8 dims]
7. Resource regeneration pattern: quantity change per resource over the last 5 timesteps [25 dims]
8. Price volatility: price change per resource type over the last 4 timesteps [16 dims]
9. Gathering efficiency: resources gathered per gathering action over the last 6 attempts [6 dims]
10. Market timing effectiveness: profit per sell action over the last 4 sales [4 dims]
11. Exploration diversity: number of different resource types interacted with in the last 10 actions [1 dim]
12. Strategy consistency score: correlation between action sequences and positive rewards over the last 15 actions [1 dim]

ACTION SPACE: Discrete(6): move in 4 directions, gather the resource at the current location, sell inventory at the market.

TRANSITION DYNAMICS: After each action:
(a) Resource quantities change by ±20-50% with 60% probability.
(b) Resource types may change (wood→stone, etc.) with 25% probability.
(c) Market prices fluctuate by ±10-30% based on the agent's recent selling behavior.
(d) New resources spawn randomly while others deplete.

REWARD FUNCTION:
- +quantity×price for selling resources
- +2 for gathering rare resources
- -0.05 per timestep
- +3.0 bonus for selling when prices are in the top 25% of recent history (market timing)
- +1.5 bonus for maintaining a diverse resource portfolio
- -1.0 penalty for repeating a gathering sequence (last 4 actions) that previously yielded zero resources

EPISODE TERMINATION: 300 timesteps elapsed, total profit exceeds 100 units, or profit falls below -20 (bankruptcy).

AGENT-SIDE REQUIREMENTS: The agent must track market timing patterns and resource availability changes, and maintain an experience buffer linking action sequences to profitability outcomes. A sketch of how this spec maps onto a Gymnasium interface follows.
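As one possible reading of the spec, here is a hedged Gymnasium skeleton: the spaces and post-action transition dynamics follow the numbers above, while gathering/selling mechanics and the full 99-dim feature assembly are left as simplified placeholders (the class name and all internal details are assumptions, not a definitive implementation):

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces

N_OBS = 99  # sum of the 12 observation components listed above

class AdaptiveResourceGatheringEnv(gym.Env):
    """Skeleton only: spaces plus the spec's post-action dynamics."""

    def __init__(self, seed=None):
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(N_OBS,), dtype=np.float32)
        self.action_space = spaces.Discrete(6)  # 4 moves, gather, sell
        self.rng = np.random.default_rng(seed)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.profit = 0, 0.0
        self.quantities = self.rng.uniform(1.0, 10.0, size=5)  # 5 resources
        self.types = self.rng.integers(0, 4, size=5)           # 4 resource types
        self.prices = self.rng.uniform(1.0, 5.0, size=4)
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        reward = -0.05  # per-timestep cost from the spec
        # ... gather/sell effects on inventory, profit, and reward go here ...
        # (a) quantities shift by ±20-50% with 60% probability, per resource
        hit = self.rng.random(5) < 0.60
        delta = self.rng.uniform(0.20, 0.50, 5) * self.rng.choice([-1.0, 1.0], 5)
        self.quantities = np.clip(self.quantities * (1 + hit * delta), 0.0, None)
        # (b) resource types mutate with 25% probability, per resource
        mutate = self.rng.random(5) < 0.25
        self.types = np.where(mutate, self.rng.integers(0, 4, size=5), self.types)
        # (c) prices fluctuate by ±10-30% (selling-behavior coupling omitted)
        sign = self.rng.choice([-1.0, 1.0], 4)
        self.prices *= 1 + sign * self.rng.uniform(0.10, 0.30, 4)
        terminated = self.profit > 100 or self.profit < -20
        truncated = self.t >= 300
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        # Placeholder: a real implementation concatenates all 12 components.
        return np.zeros(N_OBS, dtype=np.float32)
```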

Observation Space

Box(shape=(99,))

Action Space

Discrete(6)

Reward

See the REWARD FUNCTION in the specification above. A sketch of the two least obvious terms follows.
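Two reward terms are easy to get wrong in implementation: the market-timing bonus (top quartile of recent prices) and the repeated-failed-gathering penalty (last 4 actions). Below is a hedged sketch of both checks; the window length and all names are assumptions, since the spec says only "recent history":

```python
from collections import deque

import numpy as np

PRICE_WINDOW = 20  # assumed; the spec does not fix the history length

price_history: deque = deque(maxlen=PRICE_WINDOW)
failed_gather_sequences: set = set()  # 4-action tuples that yielded nothing

def market_timing_bonus(current_price: float) -> float:
    """+3.0 when the sale price sits in the top 25% of recent prices."""
    price_history.append(current_price)
    if len(price_history) < 4:
        return 0.0  # too little history to define a quartile
    threshold = np.quantile(list(price_history), 0.75)
    return 3.0 if current_price >= threshold else 0.0

def failed_sequence_penalty(last_4_actions: tuple, gathered: int) -> float:
    """-1.0 for repeating a 4-action gathering sequence that previously failed."""
    if last_4_actions in failed_gather_sequences:
        return -1.0
    if gathered == 0:
        failed_gather_sequences.add(last_4_actions)
    return 0.0
```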
