Steps Towards Continual Learning

Some elements of Continual Learning
• Learn new Skills (Options)
• Learn new Knowledge (Option-Conditional Predictions)
• Reuse / Incorporate learned Skills and Knowledge to learn more complex Skills and Knowledge [scalable, w/o catastrophic forgetting]
• Intrinsic Motivation to drive experience in the absence of (or perhaps more accurately, too long a delay in) Extrinsic Rewards
• More experienced agents (humans) as a particularly salient target of Intrinsic Motivation (imitation, demonstration, attention, etc.)

• Increasingly competent agent over time (not just in terms of the Knowledge and Skills it has, but also in terms of how well it does at accumulating Extrinsic Rewards) [Learning to Learn / meta-learning]

RL-centered view of AI

A Child’s Playroom Domain (NIPS 2004) (An ancient Continual Learning demonstration)
Agent has: hand, eye, marker
Primitive Actions: move hand to eye, move eye to hand, move eye to marker, move eye N, S, E, W, move eye to random object, move marker to eye, move marker to hand. If both eye and hand are on an object, operate on the object (e.g., push ball to marker, toggle light switch)
Objects: Switch controls room lights; Bell rings and moves one square if ball hits it; Pressing blue/red block turns music on and off; Lights have to be on to see colors; Can push blocks; Monkey cries out if bell and music both sound in dark room
Skills (example): To make the monkey cry out: move eye to switch, move hand to eye, turn lights on, move eye to blue block, move hand to eye, turn music on, move eye to switch, move hand to eye, turn light off, move eye to bell, move marker to eye, move eye to ball, move hand to ball, kick ball to make bell ring
Uses skills (options): turn lights on, turn music on, turn lights off, ring bell
Singh, Barto & Chentanez

Coverage of Continual Learning elements?

Intrinsic reward proportional to the error in prediction of a (salient) event according to the option model for that event ("surprise"); motivated in part by the novelty responses of dopamine neurons; behavior determined by this intrinsic reward.
• (Cheat) Built-in salient stimuli: changes in light intensity, changes in sound intensity

Incremental Creation of Skills/Options: Upon first occurrence of a salient event, create an option for that event and add it to the skill-KB; initialize its policy, termination conditions, etc.

Incremental Creation of Predictions/Knowledge: Upon initiating an Option, initialize and start building an Option-Model

Updating Skills/Knowledge: All options and option-models are updated all the time using intra-option learning (learning multiple skills and knowledge in parallel)

Reuse of Skills/Knowledge to Learn Increasingly Complex Skills/Knowledge: Use model-based RL (with previously learned options as actions) to learn new skills/knowledge.
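Taken together, the four mechanisms above amount to a simple control loop. Here is a minimal Python sketch of that loop, not the original NIPS 2004 code; the object interfaces (skill_kb, option-model methods, salient_events, etc.) are hypothetical placeholders standing in for the paper's lookup-table options and option models.

```python
# Sketch of the intrinsically motivated skill/knowledge loop described above.
# All interfaces are hypothetical; tabular options and option-models are assumed.

def intrinsic_reward(option_model, next_state):
    """Intrinsic reward proportional to the option model's prediction error ("surprise")."""
    predicted = option_model.predicted_probability(next_state)
    return abs(1.0 - predicted)          # large when the salient event was poorly predicted

def playroom_step(agent, env, skill_kb, option_models, salient_events):
    state = env.state
    action = agent.act(state)            # behavior driven by intrinsic reward
    next_state = env.step(action)

    for event in salient_events.occurring_in(next_state):
        if event not in skill_kb:        # first occurrence: create a new option + option-model
            skill_kb[event] = agent.new_option(event)            # init policy, termination, etc.
            option_models[event] = agent.new_option_model(event)
        r_int = intrinsic_reward(option_models[event], next_state)
        agent.update_behavior(state, action, next_state, r_int)

    # All options and option-models are updated all the time (intra-option learning);
    # learned options are then usable as actions when learning more complex skills.
    for event in skill_kb:
        skill_kb[event].intra_option_update(state, action, next_state)
        option_models[event].intra_option_update(state, action, next_state)
```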

Hierarchy of Reusable Skills
(Diagram: a skill hierarchy with Hardwired Primitive Options at the bottom (e.g., saccade to random object, marker to eye), the learned options Turn Music On, Turn Music Off, Turn Light On, Turn Light Off, and Ring Bell above them, and Activate Toy at the top.)

Do the Intrinsic Motivations Help?

Discussion
1. Learned new Skills/Options
2. Learned new knowledge in the form of predictions for the new Skills (option-models)
3. Reused learned Skills to learn more complex Skills (and associated Knowledge)
4. Agent got more competent over time at Extrinsic Reward
Caveats: (extremely) contrived domain; Intrinsic Motivations were about hard-wired salient events, a very limited form of intrinsic reward; all lookup tables (so scaling and catastrophic forgetting/interference are not present)
Next: On Deriving Intrinsic Motivations (Schmidhuber; Kaplan & Oudeyer; Thrun & Moller; Ring; others…)

On the Optimal Reward Problem* (Where do Rewards Come From?)

*with Nuttapong Chentanez, Andrew Barto, Jonathan Sorg, Xiaoxiao Guo & Richard Lewis

Autonomous Agent Problem
• Env. State Space S
• Agent Action Space A
• Rewards R: S -> scalars
• Policy: S -> A

(Diagram: the Environment contains a Critic; the Critic sends Rewards and States to the Agent, and the Agent sends Actions back to the Environment.)

Agent’s purpose is to act so as to maximize the expected discounted sum of rewards over a time horizon (the agent may or may not have a model to begin with).
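Stated as a formula (a standard rendering of this objective; the discount factor γ and horizon T are notation assumed here, not taken from the slide):

```latex
% The agent seeks a policy \pi maximizing the expected discounted sum of rewards
\pi^{*} = \arg\max_{\pi}\; \mathbb{E}\!\left[\,\sum_{t=0}^{T-1} \gamma^{t}\, R(s_t) \;\middle|\; \pi \,\right]
```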

Preferences-Parameters Confound
• (Most often the) starting point is an agent-designer that has an objective reward function that specifies preferences over agent behavior (it is often way too sparse and delayed)
• What should the agent's reward function be?
• A single reward function simultaneously confounds two roles (from the designer's point of view) in RL agents:
1. (Preferences) It expresses the agent-designer's preferences over behaviors
2. (Parameters) Through the reward hypothesis it expresses the RL agent's goals/purposes and becomes parameters of actual agent behavior
These roles seem distinct; should they be confounded?

Revised Autonomous Agent
(Diagram: the "Organism" now contains both an Internal Environment, whose Critic produces the Rewards and States seen by the Agent, and the Agent itself; the Organism exchanges Actions/Decisions and Sensations with the External Environment.)

Agent reward is internal to the agent; its parameters are to be designed by the agent-designer.

Approaches to designing reward
• Inverse Reinforcement Learning (Ng et al.)
  • Designer/operator demonstrates optimal behavior
  • Clever algorithms for automatically determining the set of reward functions under which the observed behavior is optimal (e.g., Bayesian IRL; Ramachandran & Amir)
  • Ideal: set agent reward = objective reward (i.e., preserve the preferences-parameters confound)
• Reward Shaping
  • (Ng et al.) agent reward = objective reward + potential-based reward (breaks the PP confound)
  • Objective: reach the objective-reward agent's asymptotic behavior faster! [Also Bayesian Reward Shaping by Ramachandran et al.]
• Preference Elicitation (Ideal: preserves the PP confound)
• Mechanism Design (in Economics)
• Other Heuristic Approaches

Optimal Reward Problem

• There are two reward functions:
1) Agent-designer's: objective reward R_O (given)
2) Agent's: reward R_I
Agent G(R_I; Θ) in Environment Env produces a (random) interaction h
Utility of interaction h to the agent: U_I(h) = Σ_t R_I(h_t)
Utility to the agent-designer: U_O(h) = Σ_t R_O(h_t)
Optimal Reward: R*_I = arg max_{R_I ∈ {R_I}} E_{Env} { E_{h ~ ⟨G(R_I;Θ), Env⟩} { U_O(h) } }
Nested optimizations: outer reward optimization; inner policy optimization

Illustration: Fish-or-Bait
E: fixed locations for fish and bait
A: movement actions, eat, carry
A: observes location, & food/bait when at those locations, & hunger-level & carrying-status
Bait can be carried or eaten; fish can be eaten only if bait is carried by the agent
Eat fish -> not-hungry for 1 step; eat bait -> med-hungry for 1 step; else hungry
Agent is a lookup-table Q-learner
Objective utility: U_O(h) increments by 1.0 for each fish and 0.04 for each bait eaten (but to reduce sensitivity to the precise numbers chosen, we will search over additive constants)
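A minimal sketch of how the nested optimization could be instantiated for a Fish-or-Bait-style setup: the inner loop is a lookup-table Q-learner driven by a candidate internal reward defined over the hunger-level feature, and the outer loop scores candidates by the designer's objective utility U_O. The environment interface (env_factory, obs.hunger_level, etc.) is hypothetical; this is not the original experimental code.

```python
import random
from collections import defaultdict

def run_lifetime(env_factory, internal_reward, lifetime, alpha=0.1, gamma=0.99, eps=0.1):
    """Inner optimization: a lookup-table Q-learner driven by the *internal* reward.
    Returns the designer's objective utility U_O accumulated over the lifetime."""
    env = env_factory()
    Q = defaultdict(float)
    obs, objective_utility = env.reset(), 0.0
    for _ in range(lifetime):
        acts = env.actions(obs)
        a = random.choice(acts) if random.random() < eps else max(acts, key=lambda x: Q[(obs, x)])
        next_obs, objective_reward = env.step(a)        # e.g., +1.0 per fish, +0.04 per bait eaten
        objective_utility += objective_reward
        r_int = internal_reward[next_obs.hunger_level]  # internal reward depends only on hunger-level
        best_next = max(Q[(next_obs, b)] for b in env.actions(next_obs))
        Q[(obs, a)] += alpha * (r_int + gamma * best_next - Q[(obs, a)])
        obs = next_obs
    return objective_utility

def optimal_reward_search(env_factory, candidate_rewards, lifetime, n_samples=20):
    """Outer optimization: pick the internal reward whose agents earn the most objective utility."""
    def score(r):
        return sum(run_lifetime(env_factory, r, lifetime) for _ in range(n_samples)) / n_samples
    return max(candidate_rewards, key=score)
```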

Reward Space
Reward features: hunger-level (3 values) (thus generalization across location is built in!)
Multiple experiments: for varying lifetimes/horizons
(Figure: mean reward at each horizon plotted against lifetime length; annotations: "Lifetime length at which agent has enough time to learn to eat fish with designer's reward", "Lifetime length at which agent has enough time to learn to eat fish with internal reward", "> by 3.0".)
(PP Confound Matters?) Mitigation
(Diagram, ordered along an axis of increasing agent-designer utility: Bounded agent with confounded reward → Bounded agent with optimal reward → Unbounded agent with confounded reward.)

Policy Gradient for Reward Design (PGRD) (Sorg, Singh, Lewis; NIPS 2010)

PGRD…
• Optimizes reward for planning agents, for full depth-D planning as well as for the much more practical UCT
• Computes D-step action values Q^D(s, a)
• Selects actions using a Boltzmann distribution parameterized by the action-values Q^D; policy denoted μ
• Agent reward is parameterized: R(·; Θ)
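A minimal sketch of the two agent-side pieces PGRD assumes: depth-D planning with a parameterized reward, and Boltzmann action selection over the planned values. The gradient computation through the planning recursion, which is the heart of PGRD, is only indicated in a comment; the array shapes and names below are assumptions of this sketch.

```python
import numpy as np

def plan_q(P, R_theta, depth, gamma=0.95):
    """Depth-D planning: D steps of value iteration with the parameterized reward R_theta.
    P[s, a, s'] are model transition probabilities; R_theta[s, a] is the agent's internal reward."""
    S, A = R_theta.shape
    Q = np.zeros((S, A))
    for _ in range(depth):
        V = Q.max(axis=1)                                       # greedy backup of current values
        Q = R_theta + gamma * P.reshape(S * A, S).dot(V).reshape(S, A)
    return Q

def boltzmann_policy(q_row, temperature=1.0):
    """Boltzmann (softmax) distribution over actions, parameterized by the planned values Q^D."""
    z = q_row / temperature
    p = np.exp(z - z.max())
    return p / p.sum()

# In PGRD the reward parameters (here, the entries of R_theta) are then adjusted by online
# gradient ascent on the *objective* return, differentiating the softmax policy through the
# planning recursion above; that gradient bookkeeping is omitted in this sketch.
```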

Deep Learning for Reward Design to Improve UCT in ATARI (IJCAI 2016)

Forward View: From Rewards to Utility
• Monte Carlo average at the root node:
Q(s_0^N, b) = Σ_{i'=0}^{N-1} [ 1_{i'}(s_0^N, b, 0) / n(s_0^N, b, 0) ] Σ_{h'=0}^{H-1} γ^{h'} [ R(s_{h'}^{i'}, a_{h'}^{i'}) + CNN(s_{h'}^{i'}, a_{h'}^{i'}; θ) ]   (the CNN term is the learned reward bonus)
• Execution policy of UCT:
μ(a | s_0^N; θ) = softmax( Q(s_0^N, a; θ) )
• UCT's utility:
θ* = arg max_θ E{ Σ_{t=0}^{T-1} R(s_t, a_t) | θ }

Extending previous work: Sorg, Singh & Lewis, "Reward Design via Online Gradient Ascent," NIPS 2010.
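A minimal sketch of the root-node computation in the formula above: each simulated UCT trajectory is scored by its discounted task reward plus the learned bonus, the per-action averages give the root action-values, and a softmax over them gives the execution policy. The helper names (reward_fn, bonus_fn) and the simplified bookkeeping (no explicit indicator/visit-count notation) are assumptions of this sketch.

```python
import math
from collections import defaultdict

def root_action_values(trajectories, reward_fn, bonus_fn, gamma=0.99):
    """Monte Carlo estimate of Q(s_0, b) at the UCT root, with each simulated step's reward
    augmented by a learned bonus (the CNN term in the slide's formula).

    trajectories: list of (first_root_action, [(state, action), ...]) simulated by UCT.
    reward_fn, bonus_fn: callables mapping (state, action) to a scalar."""
    totals, counts = defaultdict(float), defaultdict(int)
    for first_action, steps in trajectories:
        ret = sum(gamma ** h * (reward_fn(s, a) + bonus_fn(s, a)) for h, (s, a) in enumerate(steps))
        totals[first_action] += ret
        counts[first_action] += 1
    return {b: totals[b] / counts[b] for b in totals}

def softmax_execution_policy(q_values, temperature=1.0):
    """UCT's execution policy: a softmax over the root action-values."""
    mx = max(q_values.values())
    exps = {a: math.exp((q - mx) / temperature) for a, q in q_values.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}
```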

Backward View: From Utilities to CNN Gradients
• The Monte Carlo average at the root node and the real execution policy of UCT used in learning are the same Q(s_0^N, b) and μ(a | s_0^N; θ) as in the forward view, with the CNN reward bonuses inside them.
• UCT's utility is the same objective θ* = arg max_θ E{ Σ_{t=0}^{T-1} R(s_t, a_t) | θ }; its gradient with respect to θ gives the CNN updates.

Gradient calculation and variance reduction details can be found in the paper.

Main Results: improving UCT

• 25 ATARI games
• Not an apples-to-apples comparison (ignores the computational overhead of the reward bonus): 20 games have a ratio larger than 1
• An apples-to-apples comparison (against UCT with the same time cost per decision, i.e., deeper or wider UCT): 15 games have a ratio larger than 1
(Figure: two bar charts of per-game score ratios over the 25 games — StarGunner, DemonAttack, Q*Bert, Breakout, Amidar, Alien, Asterix, Seaquest, RoadRunner, MsPacman, BattleZone, Pooyan, Robotank, RiverRaid, Carnival, WizardOfWor, SpaceInvaders, Phoenix, BankHeist, VideoPinball, Assault, Berzerk, BeamRider, Centipede, UpNDown.)

Repeated Inverse Reinforcement Learning (for Lifelong Learning agents)
Satinder Singh*
Computer Science and Engineering, University of Michigan
May 2017
*with Kareem Amin & Nan Jiang

Inverse Reinforcement Learning [Ng&Russell’00] [Abbeel&Ng’04]

• Input:
  - Environment dynamics, e.g., an MDP without a reward function
  - Optimal behavior, e.g., the full policy or trajectories
• Output: the inferred reward function

Unidentifiability of Inverse RL

• Bad news: the problem is fundamentally ill-posed — the set of possible reward vectors consistent with the observed behavior is large [Ng&Russell'00], and a heuristic is used to guess a single point from it
• Good news (?): the agent may still mimic a good policy for this task even if the reward is not identified
And yet…

Lifelong Learning Agents
An example scenario:
• Intent: background reward function θ*: S → [-1, 1]
  - no harm to humans, no breaking of laws, cost considerations, social norms, general preferences, …
• Multiple tasks: {(E_t, R_t)}
  - E_t = ⟨S, A, P_t, γ, µ_t⟩ is the task environment (µ_t is the initial distribution)
  - R_t is the task-specific reward
• Assumption: the human is optimal in ⟨S, A, P_t, R_t + θ*, γ⟩
Can we learn θ* from optimal demonstrations on a few tasks OR generalize to new ones?

Looking more carefully at unidentifiability
There are two types of unidentifiability in IRL:
(1) Representational unidentifiability — should be ignored.
(2) Experimental unidentifiability — can be dealt with.

Representational Unidentifiability

Goal of identification: find a canonical element of [θ*]

“Experimenter” chooses tasks — formal protocol
• The experimenter chooses {(E_t, R_t)}
• The human subject reveals π_t* (optimal for R_t + θ* in E_t)
Theorem: If any task may be chosen, there is an algorithm that outputs θ s.t. ||θ - θ*||∞ ≤ ε after O(log(1/ε)) tasks.
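The flavor of the O(log(1/ε)) argument can be seen in a toy bisection sketch: for each state, choose a task reward that offsets the midpoint of the current uncertainty interval and let the demonstration reveal which half contains θ*(s). The oracle prefers_state and the per-state decoupling are simplifications of this sketch, not the paper's actual algorithm.

```python
def identify_theta(states, prefers_state, lo=-10.0, hi=10.0, eps=0.1):
    """Toy bisection sketch for the "experimenter chooses tasks" setting.

    prefers_state(s, r) is a hypothetical oracle standing in for an optimal demonstration:
    it returns True iff, when the experimenter places task reward r on state s, the
    demonstrator's behavior shows that theta*(s) + r > 0.  Each query halves the
    uncertainty interval, so O(log(1/eps)) tasks per state suffice."""
    estimate = {}
    for s in states:
        low, high = lo, hi
        while high - low > eps:
            mid = (low + high) / 2.0
            if prefers_state(s, -mid):   # task reward chosen to offset the midpoint
                low = mid                # then theta*(s) > mid
            else:
                high = mid               # otherwise theta*(s) <= mid
        estimate[s] = (low + high) / 2.0
    return estimate
```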

(Illustration over a fixed environment E with an unknown θ*: the experimenter chooses task rewards R_1, R_2, … (e.g., cells with +4, -2, +8, +0; then +5, -5, …) and observes the subject's optimal behavior; each task halves the per-state uncertainty intervals on θ*, e.g., from [-10, 10] to [0, 10] or [-10, 0], then to [0, 5], [-5, 0], [5, 10], and so on.)

Issue with the Omnipotent Setting
• Motivation was the difficulty for a human to specify the reward function
• But in the experiment, we ask: "would you want something if it costs you $X?"
• Can we make weaker assumptions on the tasks?

Nature chooses tasks
Given a sequence of arbitrary tasks {(E_t, R_t)}:
1. Agent proposes a policy π_t
2. If near-optimal, great!
3. If not, a mistake is counted, and the human demonstrates π_t* (optimal for R_t + θ* in E_t)
If {(E_t, R_t)} never change: back to classical inverse RL (θ ≠ θ*), yet the agent knows how to behave.
Algorithm design: how to behave (i.e., choose π_t)?
Analysis: upper bound on the number of mistakes?

Value and loss of a policy
Given task (E, R) where E = ⟨S, A, P, γ, µ⟩, the (normalized) value of a policy π is defined as
V^π = (1 - γ) E[ Σ_{t=0}^∞ γ^t (R + θ*)(s_t) | π, E ],
which is equal to ⟨R + θ*, x^π⟩, where x^π is the discounted state-occupancy vector of π
( x^π(s) = (1 - γ) Σ_{t=0}^∞ γ^t Pr(s_t = s | π, E) ).
Define the loss of π as its shortfall from the optimal value: loss(π) = max_{π'} V^{π'} - V^π.

Reformulation of the protocol
Every environment E induces a set of occupancy vectors {x(1), x(2), …, x(K)} (the "arms").
1. Agent proposes x. Let x* be the optimal choice.
2. If ⟨θ* + R, x⟩ ≥ ⟨θ* + R, x*⟩ - ε, great!
3. If not, a mistake is counted, and x* is revealed.
Formally, we use a transformation to Linear Bandits.

Algorithm outline (for simplicity, assume for now that R = 0)
Let θ be some guess of θ* and behave accordingly, i.e., choose x such that
⟨θ, x⟩ ≥ ⟨θ, x*⟩.   (1)
If a mistake is made, then
⟨θ*, x⟩ < ⟨θ*, x*⟩.   (2)
Subtracting (1) from (2):
⟨θ* - θ, x* - x⟩ > 0,
so each mistake yields a half-space (with normal x* - x, through θ) that must contain θ*.

How to choose θ? The ellipsoid algorithm: maintain an ellipsoid containing θ*, use its center as the guess θ, and cut it with the half-space above after every mistake; the volume shrinks by a factor of e^{-1/(2(d+1))} per cut. (x* does not have to be optimal; it just has to be better than x.)
Theorem: the number of total mistakes is O(d² log(d/ε)).
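A minimal sketch of the central-cut ellipsoid update suggested by the slide (with R = 0 and all names chosen for illustration); the paper's algorithm additionally handles the task rewards and the ε slack.

```python
import numpy as np

def ellipsoid_update(center, shape, direction):
    """One central cut of the ellipsoid method, keeping the half-space
    { theta : <theta - center, direction> >= 0 }, which by the argument above must contain
    theta* after a mistake (direction = x* - x).  The ellipsoid is
    { theta : (theta - center)^T shape^{-1} (theta - center) <= 1 }, and its volume shrinks
    by a factor of about e^{-1/(2(d+1))} per cut."""
    d = center.shape[0]
    g = direction / np.sqrt(direction @ shape @ direction)
    b = shape @ g
    new_center = center + b / (d + 1)
    new_shape = (d ** 2 / (d ** 2 - 1.0)) * (shape - (2.0 / (d + 1)) * np.outer(b, b))
    return new_center, new_shape

# Usage sketch: start with a ball known to contain theta*, use its center as the guess theta,
# and apply ellipsoid_update(center, shape, x_star - x) whenever a mistake is observed.
center = np.zeros(3)
shape = 100.0 * np.eye(3)              # ball of radius 10 in d = 3 dimensions
center, shape = ellipsoid_update(center, shape, np.array([1.0, 0.0, -1.0]))
```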

Experimenter chooses tasks: choose {(E_t, R_t)} to identify θ* — O(log(1/ε)) demonstrations.
Nature chooses tasks: choose {π_t} to minimize loss — O(d² log(d/ε)) demonstrations.
Gap? There is an Ω(d log(1/ε)) lower bound.

Zero-Shot Task Generalization by Learning to Compose Sub-Tasks
Satinder Singh, Junhyuk Oh, Honglak Lee, Pushmeet Kohli

Rapid generalization is key to Continual Learning
• Humans can easily infer the goal of unseen tasks from similar tasks even without additional learning.
  • e.g., Pick up A, Throw B → Throw A?
• When the task is composed of a sequence of sub-tasks, humans can also easily generalize to unseen compositions of sub-tasks.
  • e.g., Pick up A and Throw B → Throw B and Pick up A?
• Imagine a household robot that is required to execute a list of jobs. It is infeasible to teach the robot every possible combination of jobs.

Problem: Instruction Execution
• Given:
  • A randomly generated grid-world
  • A list of instructions in natural language
• Goal: execute the instructions
  • Some instructions require repetition of the same sub-task, e.g., Pick up "all" eggs
• Random event: a monster randomly appears.
Example instruction list: Visit cow, Pick up diamond, Hit all rocks, Pick up all eggs

Challenges
• Solving unseen sub-tasks is itself a hard problem.
• Deciding when to move on to the next instruction.
  • The agent is not told which instruction to execute.
  • It should detect when the current instruction is finished.
  • It should keep track of which instruction to solve.
• Dealing with long-horizon instructions and random events.
• Dealing with an unbounded number of sub-tasks.
• Delayed reward.

Overview
• Multi-task controller: 1) executes primitive actions given a sub-goal and 2) predicts whether the current sub-task is finished or not.
(Architecture diagram: the Meta Controller receives the Goal and the Observation and emits a Sub-goal as arguments Arg 1 … Arg n; the Multi-task Controller receives the Observation and the Sub-goal and emits an Action and a Terminal prediction.)

Overview
• Multi-task controller: 1) executes primitive actions given a sub-goal and 2) predicts whether the current sub-task is finished or not.
• Meta controller: sets sub-goals given a description of a goal.
(Same architecture diagram as above.)

Goal Decomposition
• A sub-goal is decomposed into several arguments (Arg 1 … Arg n in the diagram).
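A minimal sketch of how the two controllers interact at run time, under the decomposition just described; every interface below (initial_subgoal, act, next_subgoal, the env API) is a hypothetical stand-in for the learned networks.

```python
def run_episode(env, meta_controller, multitask_controller, instructions, max_steps=1000):
    """Sketch of the hierarchy described above: the meta controller reads the instruction list
    (the goal) and the observation and emits a sub-goal as a tuple of arguments; the multi-task
    controller executes primitive actions for that sub-goal and predicts when it has terminated."""
    obs = env.reset()
    subgoal = meta_controller.initial_subgoal(instructions, obs)   # e.g., ("Pick up", "egg")
    for _ in range(max_steps):
        action, terminal = multitask_controller.act(obs, subgoal)  # primitive action + termination
        obs, done = env.step(action)
        if done:
            break
        if terminal:                                               # current sub-task judged finished
            subgoal = meta_controller.next_subgoal(instructions, obs, subgoal)
    return obs
```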

Multi-task Controller Architecture
• Given: the observation and the sub-goal arguments (e.g., [Pick up, A])
• Do: determine a primitive action and predict whether the current state is terminal or not
(Diagram: a convolutional network over the observation is combined with the sub-goal arguments to produce the action and the termination prediction.)

Analogy-Making Regularization
• Desirable property: relationships among sub-goal embeddings should carry over to unseen combinations, e.g., Pick up A : Pick up B :: Visit A : Visit B (unseen).
• Objective function: a contrastive loss over sub-goal embeddings that enforces these analogies.
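A minimal sketch of what such a contrastive analogy objective could look like on sub-goal embeddings; the exact loss used in the paper differs in its details, and the function below is only illustrative.

```python
import numpy as np

def analogy_contrastive_loss(emb, analogies, non_analogies, margin=1.0):
    """Sketch of a contrastive analogy-making objective on sub-goal embeddings.

    emb: dict mapping a sub-goal (e.g., ("Pick up", "A")) to a numpy embedding vector.
    analogies / non_analogies: lists of quadruples (a, b, c, d) meaning a : b :: c : d.
    For analogous quadruples the embedding differences should match; for non-analogous
    ones they should be at least `margin` apart."""
    loss = 0.0
    for a, b, c, d in analogies:                       # e.g., (Pick up A, Pick up B, Visit A, Visit B)
        diff = (emb[a] - emb[b]) - (emb[c] - emb[d])
        loss += float(diff @ diff)                     # pull the two differences together
    for a, b, c, d in non_analogies:
        dist = np.linalg.norm((emb[a] - emb[b]) - (emb[c] - emb[d]))
        loss += max(0.0, margin - dist) ** 2           # push them apart, up to the margin
    return loss
```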

Multi-task Controller: Training
• Policy Distillation followed by Actor-Critic fine-tuning
  • Policy Distillation: train a separate policy for each sub-task and use them as teachers to provide actions (labels) for the multi-task controller (student) in a supervised learning setting
• Final objective: RL objective (Policy Distillation or Actor-Critic) + Analogy-making + Termination-prediction objective (binary classification)
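A minimal sketch of the combined objective during the distillation phase, assuming PyTorch-style losses and a hypothetical weighting beta; during fine-tuning the distillation term would be replaced by the actor-critic objective.

```python
import torch.nn.functional as F

def multitask_controller_loss(student_logits, teacher_actions, term_logits, term_labels,
                              analogy_loss, beta=0.1):
    """Sketch of the combined training objective described above: supervised policy
    distillation (cross-entropy to the per-sub-task teacher's actions), a binary-classification
    termination-prediction loss, and the analogy-making regularizer (weight beta assumed)."""
    distill = F.cross_entropy(student_logits, teacher_actions)            # imitate the teacher policy
    termination = F.binary_cross_entropy_with_logits(term_logits, term_labels)
    return distill + termination + beta * analogy_loss
```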

Meta Controller
(Same architecture diagram as above, now focusing on the Meta Controller: Goal and Observation in, Sub-goal (Arg 1 … Arg n) out to the Multi-task Controller, which produces the Action and Terminal prediction.)

Meta Controller Architecture
• Given: the observation, the current sub-goal, the current instruction, and the current sub-task termination signal
• Do: determine which instruction to execute and set a sub-goal
(Diagram: a convolutional network over the observation, combined with the current sub-goal, instruction, and sub-task-termination inputs, decides how to shift a pointer over the instruction list — e.g., Visit A, Pick up B, Hit C, Pick up D — by +1, 0, or -1, and produces the next sub-goal through parameter-prediction and analogy-making components.)

Meta Controller: Learning Temporal Abstraction
• Motivation
  • The meta controller operates at a high level (sub-goals).
  • It is desirable for the meta controller to operate at a larger time-scale.
• Goal: update the sub-goal and the memory pointer only when it is needed
• Method (see the sketch below)
  • Decide whether to update the sub-goal or not (binary decision)
  • If yes, update the memory pointer and update the sub-goal
  • If no, continue the previous sub-goal
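A minimal sketch of that update rule; the controller interface (should_update, pointer_shift, subgoal) is hypothetical and stands in for the learned network heads.

```python
def meta_step(meta_controller, obs, instructions, pointer, prev_subgoal, subtask_terminated):
    """Sketch of the hard temporal-abstraction rule described above: the meta controller first
    makes a binary decision about whether to update at all; only then does it run its forward
    pass, move the instruction pointer (+1, 0, or -1), and emit a new sub-goal.  Otherwise it
    simply carries the previous sub-goal forward."""
    if not meta_controller.should_update(obs, prev_subgoal, subtask_terminated):
        return pointer, prev_subgoal                                        # copy the previous sub-goal
    shift = meta_controller.pointer_shift(obs, instructions, pointer)       # in {-1, 0, +1}
    pointer = max(0, min(len(instructions) - 1, pointer + shift))
    return pointer, meta_controller.subgoal(obs, instructions[pointer])
```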

(Diagrams: when the decision is "update", the meta controller runs its forward propagation over the convolutional features, shifts the pointer over the instruction list (Visit A, Pick up B, Hit C, Pick up D) by +1, 0, or -1, and emits a new current sub-goal; when the decision is "no update", it simply copies the previous sub-goal — the forward propagation is done only when update == true.)

Does it Work?

Value Prediction Networks*
Junhyuk Oh, Satinder Singh, Honglak Lee
*Under review (on arXiv in late July, 2017)

Motivation
• Observation-prediction (dynamics) models are difficult to build in high-dimensional domains.
• We can make lots of predictions at different temporal scales.
• So, how do we plan without predicting observations?
VPNs are heavily inspired by Silver et al.'s Predictron; the Predictron was limited to policy evaluation, while VPNs extend to learning optimal control.

VPN: Architecture

One Step Rollout

Mul0 Step Rollout

Planning in VPNs

Learning in VPNs
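Since the architecture, rollout, planning, and learning slides above are figures, here is a simplified sketch of the kind of d-step backup a VPN performs on abstract states; the module names are placeholders, and the paper's actual backup (which mixes predicted values across depths and predicts per-step discounts) is richer than this.

```python
def vpn_q(modules, x, option, options, depth, gamma=0.99):
    """Simplified d-step VPN planning backup on an abstract state x.

    modules = (transition, reward, value) are the VPN's learned modules (names assumed here):
    transition maps (abstract state, option) to the next abstract state, reward predicts the
    immediate reward, and value predicts the abstract state's value.  Observations themselves
    are never predicted."""
    transition, reward, value = modules
    r = reward(x, option)
    x_next = transition(x, option)
    if depth <= 1:
        return r + gamma * value(x_next)
    return r + gamma * max(vpn_q(modules, x_next, o, options, depth - 1, gamma) for o in options)

def plan(modules, encode, obs, options, depth):
    """Encode the observation into an abstract state and pick the option with the best d-step value."""
    x = encode(obs)
    return max(options, key=lambda o: vpn_q(modules, x, o, options, depth))
```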

Collect Domain: Results 1
(Figure panels: the Domain, a DQN trajectory, and a VPN trajectory.)

Collect Domain: Results 2
(Figure panels: a VPN plan over 20 steps and a VPN plan over 12 steps.)

Collect Domain: Comparisons
(Figure: average reward vs. training epoch, comparing Greedy, Shortest, DQN, OPN(1/2/3/5), and VPN(1/2/3/5).)

VPN: Results on ATARI Games
(Figure: learning curves of score vs. training epoch on nine games — Frostbite, Ms. Pacman, Amidar, Q*Bert, Alien, Enduro, Seaquest, Krull, Crazy Climber — comparing DQN and VPN.)
Questions?
